Kubernetes Kit enables scalability, high availability, and non-disruptive rolling updates for Vaadin applications running in a Kubernetes cluster.
This tutorial will guide you through setting up and deploying an application with Kubernetes Kit in a local Kubernetes cluster.
Download a new Vaadin project from https://start.vaadin.com/.
To get started, you first need to add Kubernetes Kit as a dependency to the application:
```xml
<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>kubernetes-kit-starter</artifactId>
</dependency>
```
Add the following to the application configuration file:
The second property enables the session serialization debug tool during development.
Kubernetes Kit enables high availability and the possibility to scale applications up and down in a Kubernetes cluster by storing session data in a backend that is accessible to the cluster. Note that you don’t need to enable session replication if you are only interested in using the rolling update feature of the Kit.
This tutorial uses Hazelcast for this purpose. However, Redis is also supported.
You will need to add the Hazelcast dependency to the application:
```xml
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>
```
Next, you need to add the following to the application configuration file:
Now, deploy the Hazelcast RBAC configuration to the cluster:
kubectl apply -f https://raw.githubusercontent.com/hazelcast/hazelcast/master/kubernetes-rbac.yaml
If you want to deploy this to a namespace other than the default one, remember to edit the namespace in the RBAC configuration accordingly.
Then you need to deploy a load balancer service to your cluster. Create the following Kubernetes manifest file as hazelcast.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service
spec:
  selector:
    app: my-app
  ports:
  - name: hazelcast
    port: 5701
  type: LoadBalancer
```
Next, deploy the manifest to your cluster:
kubectl apply -f hazelcast.yaml
You should now see the load balancer service:
kubectl get svc hazelcast-service
```
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hazelcast-service   LoadBalancer   10.96.178.190   <pending>     5701:31516/TCP   18h
```
The next step is to build a container image of the application and deploy it to your Kubernetes cluster.
You’ll first need to clean and package the application for production:
mvn clean package -Pproduction
Then create the following Dockerfile and place it in the main directory of the application:
```dockerfile
FROM openjdk:17-jdk-slim
COPY target/*.jar /usr/app/app.jar
RUN useradd -m myuser
USER myuser
EXPOSE 8080
CMD java -jar /usr/app/app.jar
```
Now, open a terminal in the main directory and use Docker to build a container image of the application, tagged with version 1.0.0. Note the required period at the end of the command: it tells Docker to use the current directory as the build context.
docker build -t my-app:1.0.0 .
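Optionally, you can smoke-test the image locally before deploying it. The port mapping below assumes port 8080, as exposed in the Dockerfile above:

```shell
# Run the image locally, publishing the application port;
# --rm removes the container again once it stops
docker run --rm -p 8080:8080 my-app:1.0.0
```

The application should then respond at http://localhost:8080; stop the container with Ctrl+C.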
Depending on the Kubernetes cluster you are using, you may need to publish the image to a local registry or push the image to the cluster. Otherwise, the image will not be found. Please refer to your cluster documentation for more information on this.
If you’re using "kind" on a local machine, you need to load the image into the cluster; otherwise the image won’t be found when the pods start.
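The kind CLI provides a command for this; assuming the image tag built above:

```shell
# Copy the locally built image into the kind cluster's node(s)
kind load docker-image my-app:1.0.0
```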
Now create a deployment manifest for the application as app-v1.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
      version: 1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: 1.0.0
    spec:
      containers:
      - name: my-app
        image: my-app:1.0.0
        # Sets the APP_VERSION environment variable for the container, which is
        # used during the version update to compare with the new version
        env:
        - name: APP_VERSION
          value: 1.0.0
        ports:
        - name: http
          containerPort: 8080
        - name: multicast
          containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-v1
spec:
  selector:
    app: my-app
    version: 1.0.0
  ports:
  - name: http
    port: 80
    targetPort: http
```
Note: The multicast port (5701) is only used for session replication with Hazelcast.
Deploy the manifest to your cluster:
kubectl apply -f app-v1.yaml
You should now see 4 pods running in the cluster. Below is an example of how this might look:
kubectl get pods
```
NAME                        READY   STATUS    RESTARTS   AGE
my-app-v1-f87bfcbb4-5qjml   1/1     Running   0          22s
my-app-v1-f87bfcbb4-czkzr   1/1     Running   0          22s
my-app-v1-f87bfcbb4-gjqw6   1/1     Running   0          22s
my-app-v1-f87bfcbb4-rxvjb   1/1     Running   0          22s
```
To access the application, you need to define ingress rules.
If you don’t already have ingress-nginx installed in your cluster, install it with the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
Then create an ingress rule manifest file as ingress-v1.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # --- Optional ---
    # If server Push is enabled in the application and uses WebSocket for
    # transport, these settings replace the default WebSocket connection
    # timeouts in NGINX.
    nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
    # ---
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-v1
            port:
              number: 80
```
Deploy the manifest to your cluster:
kubectl apply -f ingress-v1.yaml
The application should now be available at http://localhost. Depending on your cluster setup, the ingress controller may not be reachable directly from your local machine; in that case, you can forward a local port to it, after which the application is available at http://localhost:8080.
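One way to reach the ingress controller from your local machine is port-forwarding with kubectl. The namespace and service name below are the defaults created by the ingress-nginx deploy manifest; adjust them if your installation differs:

```shell
# Forward local port 8080 to port 80 of the ingress-nginx controller service
kubectl port-forward -n ingress-nginx service/ingress-nginx-controller 8080:80
```

While this command is running, the application answers at http://localhost:8080.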
You can use kubectl commands to increase or reduce the number of pods used by the deployment. For example, the following command increases the number of pods to 5:
kubectl scale deployment/my-app-v1 --replicas=5
You can also simulate the failure of a specific pod by deleting it by name:
kubectl delete pod/<pod-name>
Remember to substitute the name of your application pod for the placeholder.
If you have enabled session replication, this is a good way to check that it’s working as expected: open the application, delete the pod it’s connected to, and then perform another action in the UI. You should not lose any session data.
The Kubernetes Kit can also help you roll out a new version of your application in a Kubernetes cluster.