Because there are few things more exciting than deploying an AI app to your own cloud, today we'll be deploying Alejandro Duarte's Vaadin Chatbot demo app to a local Kubernetes cluster. It is fairly straightforward.
If you haven’t already done so, read the blog on how to build the Chatbot app here.
In this article, we first checkout, build and run the application to confirm it works locally. Then, we create a Docker image and deploy it to our local Docker environment. Again, this is to confirm that the application still works as expected. We then push the image to a private image registry and tell Kubernetes the proper credentials: this allows Kubernetes to deploy and run our app.
Let's get started.
Building the application
We start by cloning the Chatbot app from GitHub and switching to the advanced branch, which contains the version we would like to use:
git clone https://github.com/alejandro-du/vaadin-ai-chat/
cd vaadin-ai-chat
git checkout advanced
To ensure that everything works, we first build and run the app locally:
mvn package
java -jar target/ai-chat.war
Thanks to Spring Boot magic, the Maven package command builds the application as a WAR file, after which it is started as a Java JAR application from the target directory. Check it out at http://localhost:8080 and ask Alice a few questions. By the way, you can get a nice grasp of what Alice is capable of by taking a look at the AIML files in the project.
Creating a Docker image
Now that we know that the app works locally, we create a container image containing the application so we can easily deploy it to other environments. To build a container image, we need a Dockerfile. This is where a feature called 'Docker configuration', recently added to the Vaadin Starter, comes in handy. We won't use the generated Starter app itself this time, but we will extract the Dockerfile from the ZIP file that is conveniently downloaded after configuring the application.
So, open https://start.vaadin.com/, select 'Docker configuration' in the bottom left corner and click Download App (blue button) at the top. Open the ZIP file and copy the Dockerfile to the root of the vaadin-ai-chat project folder.
Take a moment to look at the Dockerfile to see what it contains. Note that you will need to replace .jar with .war on lines 13 and 17 to make it work with the Chatbot demo app, which builds a WAR file. It is interesting to note that the Dockerfile consists of two parts (a multi-stage build). In the first stage, the Chatbot application is built using Java 11, Maven and Node.js; in the second stage, the resulting WAR file is actually run. You can see that the WAR file is copied from the first stage into the second stage, to /usr/app/app.war. Note that this means that the original source code is no longer available in the image that is eventually run.
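The actual file comes from the Starter download, but its overall shape is roughly the following (a sketch, not the exact Starter Dockerfile; the base images, stage name and build paths here are illustrative):

```dockerfile
# Stage 1: build the application with Maven and Java 11
# (the Vaadin build fetches Node.js as part of the frontend build)
FROM maven:3.6-jdk-11 AS build
WORKDIR /usr/src/app
COPY . .
RUN mvn package

# Stage 2: run only the packaged WAR; the sources stay behind in stage 1
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/*.war /usr/app/app.war
EXPOSE 8080
CMD ["java", "-jar", "/usr/app/app.war"]
```

Because only the WAR file is copied into the second stage, the final image ships without the build tools or source code.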
A Docker image based on the Dockerfile is created as follows:
docker build --tag vaadin-ai-chat:advanced .
(Don't forget the period at the end.) Note that the image is now tagged with its name vaadin-ai-chat and version indicator advanced.
Deploying to Docker locally
Next, as an intermediate step, it is good to check that the image can be correctly deployed to your local Docker environment, by telling Docker to run it:
docker run --publish 8080:8080 --detach --name vaadin-chat-advanced vaadin-ai-chat:advanced
This should give you a working application, again on port 8080, but this time served from a running Docker container. It is reassuring to know we're on the right track.
We can confirm that the application image no longer contains the source code by taking a quick peek inside the running container. Look up the container name with docker ps, then open a shell into the running container with docker exec -it <container name> /bin/bash. Have a look at the container's directory structure and confirm that the Java application being run is indeed found at /usr/app/app.war.
After we've established that the application (and thus also the image) works correctly, we can remove the running container with:
docker rm --force vaadin-chat-advanced.
Pushing the image to a repository
Compared to Docker, Kubernetes works a bit differently, which means that we cannot just push our Docker image to the Kubernetes cluster. We need to first push the image to a registry, from which it will be pulled by our Kubernetes cluster. A registry is an online location where images are stored and from where they are managed. We tag the image we created to ensure that it has a name and version number that uniquely identifies it.
docker tag vaadin-ai-chat:advanced <docker user_name>/vaadin-ai-chat:advanced
Replace <docker user_name> with your own Docker username. If you don't have one, you can easily create one.
With this tag in place, we can now push the image to a registry. We could push to a local registry or one that is run within your own organisation, but in this case we use Docker Hub, Docker's online registry. Docker Hub allows you to have one private repository for free, which is nice to experiment with. We use a private repository, which means that we need to share our Docker credentials with Kubernetes: we'll look at that in a moment. We first push the image we created to the registry with:
docker push <docker user_name>/vaadin-ai-chat:advanced
To check all is as expected, log in to Docker Hub in a browser and confirm that the image is available. If necessary, modify the repository's visibility settings to make it private.
Next, we need to create a secret, which Kubernetes can use to authenticate and download the image we just pushed from the registry.
We create a secret called
regcred as follows:
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<docker user_name> --docker-password=<docker password> --docker-email=<docker email>
(Replace the values between <> with your own Docker credentials.)
Once you have created the secret, it is good to inspect the result with:
kubectl get secret regcred --output=yaml
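The credentials live base64-encoded in the secret's .dockerconfigjson field. The kubectl one-liner below (from the Kubernetes private-registry docs) decodes them for inspection; the printf line simply demonstrates the decoding step on a tiny sample value so you can see what base64 decoding does:

```shell
# Inspect the decoded registry credentials (requires a configured kubectl):
#   kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
# The field is plain base64; decoding works like this:
printf '%s' 'eyJhdXRocyI6e319' | base64 --decode   # prints {"auths":{}}
```

If the decoded JSON shows your username and registry under "auths", the secret was created correctly.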
Kubernetes has an extensive command-line utility called kubectl (pronounced 'cube control' by some), whose long list of options can be slightly intimidating at first. Before deploying to Kubernetes, we install the Web UI Dashboard (https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/), which gives an extensive visual perspective on what is happening on our cluster.
Run the following from the command line, using the recommended.yaml URL for the current Dashboard release as listed on the page linked above (v2.0.0 shown here as an example):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
Before we can log in to our new admin dashboard, we need to create a login and bind it to the cluster-admin role. We do this by running:
kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
We confirm that this worked by listing the secrets with kubectl get secrets: you should see a token secret for the new service account (named dashboard-admin-sa-token- followed by a random suffix). Retrieve the token it contains with kubectl describe secret <secret name>. Copy the token and be ready to paste it when opening the Web UI. You will log in to an impressive UI dashboard that uncovers many Kubernetes details at a glance.
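With the token at hand, the usual way to reach the Dashboard (per the Kubernetes Dashboard docs) is through a local proxy to the cluster's API server; the URL printed below is the standard proxy path for the Dashboard service:

```shell
# Start a local proxy to the cluster's API server (runs in the foreground):
#   kubectl proxy
# Then open the Dashboard at the standard proxy URL and paste the token:
echo 'http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/'
```

Choose 'Token' on the login screen and paste the token you copied earlier.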
As a last remark for this section: Lens (an application that aims to be a 'Kubernetes IDE') is an interesting alternative for viewing what is happening in a Kubernetes cluster. It's worth a look...!
Deploying to Kubernetes
Before we can deploy our application, we need to define it in a Kubernetes configuration file, which uses YAML format. Save the following in the root of the vaadin-ai-chat project with filename vaadin-ai-chat.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaadin-chat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vaadin-chat
  template:
    metadata:
      labels:
        app: vaadin-chat
    spec:
      containers:
        - name: vaadin-chat
          image: docker.io/<docker user_name>/vaadin-ai-chat:advanced
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: vaadin-chat
spec:
  type: NodePort
  selector:
    app: vaadin-chat
  ports:
    - port: 8080
      nodePort: 30001

Don't forget to change <docker user_name> in the image reference to your own Docker username.
There is extensive documentation on the Kubernetes configuration file format available online (e.g. here or here), so we will not explain its full contents in this article, except to point out the reference to the image (docker.io/<docker user_name>/vaadin-ai-chat:advanced) and the secret we created earlier (imagePullSecrets: - name: regcred). At the end of the config file, you can see that the application that runs in the pod on port 8080 is actually exposed via port 30001.
Next, we install the Chatbot application by asking kubectl to apply the configuration, which deploys our application:
kubectl apply -f vaadin-ai-chat.yml
If you open your browser at http://localhost:30001, you will see the Chat app in action for the third time, this time running on your local Kubernetes cluster. Congratulations...!
At this stage, we should point out that Vaadin applications are stateful and require sticky sessions. This means that our current example is too simplistic to be used for a full production setup. (An explanation of how to configure a full stateful Kubernetes configuration is beyond the scope of this article.)
You can remove (undeploy) the application again with the following command:
kubectl delete -f vaadin-ai-chat.yml.
All in all, quite straightforward.
Thanks for your attention...!
This article is based on the following technologies:
- macOS Catalina (10.15.5)
- Java 11+
- Maven 3.6.3
- kubernetes-cli stable 1.18.5 (bottled) via Homebrew
- Docker Desktop Community edition