Getting Started with Kubernetes Kit

Step-by-step guide showing how to enable scalability, high availability, and non-disruptive rolling updates for your application using Kubernetes Kit.

This tutorial guides you through setting up and deploying an application with Kubernetes Kit in a local Kubernetes cluster.

1. Requirements

This tutorial assumes that you have the following software installed on your local machine:

  - Docker
  - a local Kubernetes cluster, such as kind or minikube
  - kubectl
  - a Java JDK and Maven, to build the application
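
You can check that the tools are available by printing their versions, for example:

docker version
kind version
kubectl version --client
mvn -version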

2. Set Up a Vaadin Project

Download a new Vaadin project from start.vaadin.com.

3. Add the Kubernetes Kit Dependency

  1. To get started, add Kubernetes Kit as a dependency to the project:

    <dependency>
      <groupId>com.vaadin</groupId>
      <artifactId>kubernetes-kit-starter</artifactId>
    </dependency>
  2. Add the following to the application configuration file:

    # (1)
    vaadin.devmode.sessionSerialization.enabled=true
    # (2)
    vaadin.serialization.transients.include-packages=com.example.application
    1. This property enables the session serialization debug tool during development.

    2. This property defines the classes that should be inspected for transient fields during session serialization. In this case, inspection is limited to classes within the starter project. For more information, see Session Replication. The sketch below shows what such a class might look like.
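
For example, a UI class in the starter project might hold a reference to a non-serializable object in a transient field. Because such a class lives under com.example.application, the property above makes Kubernetes Kit inspect it during session serialization (see Session Replication for how transient fields are handled on deserialization). A minimal sketch, with hypothetical class and service names:

package com.example.application.views;

import com.vaadin.flow.component.html.Paragraph;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

// Hypothetical application service; in a real project this would typically
// be a Spring-managed bean.
interface GreetingService {
    String greet(String name);
}

@Route("hello")
public class HelloView extends VerticalLayout {

    // Not serializable itself, so it's marked transient to keep the HTTP
    // session serializable. The include-packages property above makes this
    // class eligible for transient-field inspection.
    private final transient GreetingService greetingService;

    public HelloView(GreetingService greetingService) {
        this.greetingService = greetingService;
        add(new Paragraph(greetingService.greet("Kubernetes Kit")));
    }
}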

4. Session Replication Backend

You don’t need to enable session replication if you only want rolling updates.

High availability, and the ability to scale an application up and down in a cluster, are enabled by storing session data in a backend that is accessible to the whole cluster. This tutorial uses Hazelcast for this purpose; Redis is also supported.

  1. Add the Hazelcast dependency to the project:

    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast</artifactId>
    </dependency>
  2. Add the following property to the application configuration file. It gives the name of the Kubernetes service that Hazelcast uses to discover the other cluster members:

    vaadin.kubernetes.hazelcast.service-name=hazelcast-service
  3. Deploy the RBAC rules that Hazelcast needs for member discovery by running the following command:

    kubectl apply -f https://raw.githubusercontent.com/hazelcast/hazelcast/master/kubernetes-rbac.yaml
    Note
    Deploying to Another Namespace

    If you want to deploy to a namespace other than default, you need to download the kubernetes-rbac.yaml file and edit the hard-coded namespace. Then deploy it to your cluster like so:

    kubectl apply -f path/to/custom/kubernetes-rbac.yaml
  4. Deploy a load balancer service to your cluster. Create the following Kubernetes manifest file as hazelcast.yaml. Note that its selector matches the app: my-app label given to the application pods later in this tutorial:

    apiVersion: v1
    kind: Service
    metadata:
      name: hazelcast-service
    spec:
      selector:
        app: my-app
      ports:
        - name: hazelcast
          port: 5701
      type: LoadBalancer

    Then deploy the manifest to your cluster:

    kubectl apply -f hazelcast.yaml
  5. Run the following command to see that the load balancer service is running:

    kubectl get svc hazelcast-service

    You should see output similar to the following (the IP addresses and age may differ):

    NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    hazelcast-service   LoadBalancer   10.96.178.190   <pending>     5701:31516/TCP   18h

5. Build and Deploy the Application

The next step is to build a container image of the application and deploy it to your Kubernetes cluster.

  1. Clean the project and create a production build of the application:

    mvn clean package -Pproduction
  2. Create the following Dockerfile in the project directory:

    # Slim Java 17 base image
    FROM openjdk:17-jdk-slim
    # Copy the packaged application JAR into the image
    COPY target/*.jar /usr/app/app.jar
    # Create and switch to a non-root user
    RUN useradd -m myuser
    USER myuser
    EXPOSE 8080
    CMD java -jar /usr/app/app.jar
  3. Open a terminal in the project directory and use Docker to build a container image for the application. Tag it with version 1.0.0. Note the required period (.) at the end of the command:

    docker build -t my-app:1.0.0 .
    Note
    Image not found by cluster

    Depending on the Kubernetes cluster you’re using, you may need to publish the image to a local registry or load it into the cluster. Otherwise, the image won’t be found. Refer to your cluster’s documentation for more information.

    If you’re using kind on a local machine, you need to load the image into the cluster like so:

    kind load docker-image my-app:1.0.0

    In a production environment, you can publish the image to a registry that is accessible to the cluster.

  4. Create a deployment manifest for the application as app-v1.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-v1
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: my-app
          version: 1.0.0
      template:
        metadata:
          labels:
            app: my-app
            version: 1.0.0
        spec:
          containers:
            - name: my-app
              image: my-app:1.0.0
              # Sets the APP_VERSION environment variable for the container. It is
              # used during a version update to compare the running version with the new one.
              env:
                - name: APP_VERSION
                  value: 1.0.0
              ports:
                - name: http
                  containerPort: 8080
                - name: multicast
                  containerPort: 5701 # (1)
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-v1
    spec:
      selector:
        app: my-app
        version: 1.0.0
      ports:
        - name: http
          port: 80
          targetPort: http
    1. The multicast port 5701 is only used for session replication using Hazelcast.

    Deploy the manifest to your cluster:

    kubectl apply -f app-v1.yaml
  5. Run the following command to verify that you have 4 pods running:

    kubectl get pods

    You should see output similar to the following:

    NAME                            READY   STATUS    RESTARTS      AGE
    my-app-v1-f87bfcbb4-5qjml       1/1     Running   0             22s
    my-app-v1-f87bfcbb4-czkzr       1/1     Running   0             22s
    my-app-v1-f87bfcbb4-gjqw6       1/1     Running   0             22s
    my-app-v1-f87bfcbb4-rxvjb       1/1     Running   0             22s
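
    If the pods aren't ready yet, you can wait for the rollout to finish and inspect the application logs using standard kubectl commands:

    kubectl rollout status deployment/my-app-v1
    kubectl logs deployment/my-app-v1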

6. Ingress Rules

To access the application, you need to provide some ingress rules.

  1. If you don’t already have ingress-nginx installed in your cluster, install it with the following command:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
  2. Create an ingress rule manifest file as ingress-v1.yaml, like so:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        kubernetes.io/ingress.class: "nginx"
        # --- Optional ---
        # If server push is enabled in the application and uses WebSocket for transport,
        # these settings replace the default WebSocket connection timeouts in Nginx.
        nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
        # ---
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app-v1
                    port:
                      number: 80

    Deploy the manifest to your cluster with the following command:

    kubectl apply -f ingress-v1.yaml
  3. The application should now be available at localhost.

    Note
    Accessing the application from your local machine

    To access the application from your local machine, it may be necessary to use the port-forward utility. In that case, use the following command:

    kubectl port-forward -n ingress-nginx service/ingress-nginx-controller 8080:80

    The application should now be available at localhost:8080.
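
    You can then check that the application responds, for example with curl:

    curl -i http://localhost:8080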

7. Scaling the Application

You can use kubectl commands to increase or decrease the number of pods used by the deployment. For example, the following command increases the number of pods to 5:

kubectl scale deployment/my-app-v1 --replicas=5
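
To confirm the new replica count, check the deployment; the READY column should eventually show 5/5:

kubectl get deployment my-app-v1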

You can also simulate the failure of a specific pod by deleting it by name like so:

kubectl delete pod/<pod-name>

Remember to substitute the name of your application pod. You can see the names of all the pods with the kubectl get pods command.

If you have enabled session replication, deleting a pod is a convenient way to check that it's working as expected. If you open the application and then delete the pod to which it's connected, you shouldn't lose any session data when you next interact with the application.
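
To verify that the pods have joined a common Hazelcast cluster, you can also inspect the logs of any application pod. The exact output depends on the Hazelcast version, but membership changes are typically logged in a block that starts with Members {size:...}:

kubectl logs <pod-name> | grep -i members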