Troubleshooting
- Invalid Docker Image
- DNS Propagation Delays
- Deployment and Pod Inspection
- Insufficient Cluster Resources
Due to the breadth of functionality offered by Control Center—including Kubernetes resource management, DNS, SSL, identity integration, and database provisioning—misconfigurations or external infrastructure issues can sometimes cause unexpected behavior. This guide outlines common problems and how to resolve them.
Invalid Docker Image
When an application deployed with Control Center fails to become available, a common cause is an invalid or unreachable Docker image reference. This typically results in pods stuck in ImagePullBackOff or ErrImagePull states, and no logs are available because the container never starts.
Symptoms
- The application status remains in a pending or error state.
- kubectl shows warning events for failed image pulls.
- Logs are unavailable because the container image was never pulled.
To diagnose, inspect recent events in the namespace:
Source code
Terminal
kubectl -n vaadin events --types=Warning
A failing deployment typically produces output similar to:
Source code
Warning Failed Pod/app-6bd5d5d6c7-xxxxx Failed to pull image "foo:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for foo, repository does not exist or may require 'docker login'
Warning FailedPull Pod/app-6bd5d5d6c7-xxxxx Error: ErrImagePull
Warning BackOff Pod/app-6bd5d5d6c7-xxxxx Back-off pulling image "foo:latest"
This indicates that Kubernetes is unable to locate or access the referenced image.
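To see the exact image reference a failing pod is trying to pull, describe the pod; the Events section at the end of the output repeats the pull error:
Source code
Terminal
kubectl -n vaadin describe pod <pod-name>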
Understanding Docker Image References
A valid image reference follows this structure:
[registry[:port]/]namespace/repository[:tag]
Examples:
- company/my-app – default Docker Hub reference (with implied registry docker.io and tag latest)
- ghcr.io/company/my-app:1.2.3 – GitHub Container Registry
- 172.31.0.15:8081/company/my-app:2024.04.1 – private registry
If the registry is private, authentication is required using an image pull secret.
[Example] Public Image from Docker Hub
For public images hosted on Docker Hub (or other public registries), no credentials are needed. The following App manifest is sufficient:
Source code
public-app.yaml
apiVersion: vaadin.com/v1alpha1
kind: App
metadata:
  name: public-app
spec:
  host: public.example.com
  image: company/my-app:1.0.0
  version: 1.0.0
  replicas: 2
This pulls the image company/my-app:1.0.0 from the default docker.io registry.
[Example] Private Image with Pull Secret
For private registries, an image pull secret must be defined and referenced in the App manifest:
- First, create a Kubernetes secret (if not already defined):
Source code
Terminal
kubectl create secret docker-registry my-private-registry \
  --docker-server=172.31.0.15:8081 \
  --docker-username=<username> \
  --docker-password=<password> \
  -n vaadin
- Reference the secret in the App manifest:
Source code
private-app.yaml
apiVersion: vaadin.com/v1alpha1
kind: App
metadata:
  name: private-app
  namespace: vaadin
spec:
  host: private.example.com
  image: 172.31.0.15:8081/company/my-app:2024.04.1
  version: 2024.04.1
  replicas: 2
  imagePullSecrets:
    - name: my-private-registry
This ensures that the Kubernetes nodes can authenticate with the private registry to pull the image.
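As a quick check, confirm that the pull secret actually exists in the application's namespace, using the secret name from the example above:
Source code
Terminal
kubectl -n vaadin get secret my-private-registry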
Additional Tips
- Verify the image exists and can be pulled using the Docker CLI:
Source code
Terminal
docker pull company/my-app:1.0.0
- Double-check spelling, tags, and casing — image names and tags are case-sensitive.
- If the image tag is omitted, Kubernetes defaults to latest.
Important
Failing to configure access to private registries or referencing non-existent images prevents application pods from starting and may block rollout entirely. Always confirm that images are valid, accessible, and authenticated when necessary.
DNS Propagation Delays
If a domain used by Control Center or a deployed application fails to resolve or load in the browser, the DNS record may not have fully propagated.
DNS propagation is the time it takes for changes to domain records (e.g., A or CNAME) to become visible across DNS resolvers globally. Until this process is completed, domains may resolve incorrectly or not at all.
Symptoms
- Browser shows a DNS resolution error or timeout
- HTTPS fails due to a missing certificate
- ping or nslookup fails for the domain
To inspect DNS resolution:
Source code
Terminal
nslookup control.example.com
Expected output:
Source code
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: control.example.com
Address: 203.0.113.42
Alternatively, use ping to test resolution:
Source code
Terminal
ping -c 4 control.example.com
If the name does not resolve, it means the change has not yet propagated or is misconfigured.
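To estimate how long resolvers may keep a cached answer, the record's remaining TTL (shown in seconds in the answer line) can be checked with dig, if it is available:
Source code
Terminal
dig +noall +answer control.example.com
A low TTL means updated records should be picked up relatively quickly; a high TTL can extend the propagation window.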
Solutions
- Wait for propagation to complete — this may take several minutes up to 24 hours depending on Time-To-Live (TTL) settings.
- Check the DNS configuration in your cloud provider’s dashboard.
- Confirm that DNS records point to the public IP address of the cluster ingress.
In local environments, domains like *.local.gd can be used, or the hostname can be mapped manually in the local hosts file.
For example:
- On Linux/macOS: /etc/hosts
- On Windows: C:\Windows\System32\drivers\etc\hosts
Example line:
Source code
127.0.0.1 control.local.gd
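After saving the hosts file, a quick way to confirm the mapping is to ping the name and verify that it resolves to 127.0.0.1 (on Windows, use ping -n 1 instead of -c 1):
Source code
Terminal
ping -c 1 control.local.gd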
Certificate Troubleshooting
If TLS certificates are not issued and DNS is resolving correctly, inspect the status of certificate requests:
Source code
Terminal
kubectl -n vaadin get certificaterequests
Expected output:
Source code
NAME READY AGE
control-center-cert-xyz True 2m
app-foo-cert-abc True 1m
If certificate requests are not marked True, cert-manager may still be waiting for DNS propagation before issuing certificates.
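For more detail on why a request has not completed, describe the individual resource; the reason is usually visible in its status conditions (replace the name with one from the listing above):
Source code
Terminal
kubectl -n vaadin describe certificaterequest control-center-cert-xyz
If an ACME issuer such as Let's Encrypt is used, pending challenges can also be listed with kubectl -n vaadin get challenges.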
Deployment and Pod Inspection
A 503 Service Temporarily Unavailable error often means the service is not ready to receive traffic. This may occur if the application pod has not started yet or health probes are failing.
Symptoms
- Browser displays a 503 error
- Application pod remains in Pending, CrashLoopBackOff, or Running without readiness
- Liveness or readiness probes fail
To inspect the pod:
Source code
Terminal
kubectl -n vaadin get pods
Check detailed status:
Source code
Terminal
kubectl -n vaadin describe pod <pod-name>
Also review recent events:
Source code
Terminal
kubectl -n vaadin get events
To stream logs from the application deployment:
Source code
Terminal
kubectl -n vaadin logs -f deployment/my-app
If probes are failing due to slow startup, increase the startupProbe initial delay:
Source code
my-app.yaml
apiVersion: vaadin.com/v1alpha1
kind: App
metadata:
  name: my-app
spec:
  host: my-app.example.com
  image: company/my-app:1.0.0
  version: 1.0.0
  startupProbe:
    initialDelaySeconds: 60
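To confirm the new delay has propagated to the workload, the probe configuration of the generated Deployment can be inspected (this assumes Control Center creates a Deployment named my-app in the vaadin namespace):
Source code
Terminal
kubectl -n vaadin get deployment my-app -o jsonpath='{.spec.template.spec.containers[0].startupProbe}'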
Insufficient Cluster Resources
If Control Center or a deployed application performs poorly or becomes unresponsive, the Kubernetes cluster may be under-resourced.
This typically occurs when CPU or memory requests exceed available capacity on the node. See Resource Management for Pods and Containers for more information.
Symptoms
- Pods remain in Pending state
- Frequent OOMKilled (Out Of Memory) container restarts
- Node events show memory or disk pressure
Inspect node status:
Source code
Terminal
kubectl describe node
Look for entries like:
Source code
Conditions:
MemoryPressure: True
DiskPressure: False
PIDPressure: False
Allocatable:
cpu: 4
memory: 8Gi
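If the metrics-server is installed in the cluster, actual CPU and memory usage can also be inspected directly:
Source code
Terminal
kubectl top nodes
kubectl -n vaadin top pods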
Solutions
- Scale up node capacity through your cloud provider or local cluster tool.
- Ensure other workloads are not over-consuming resources in the same namespace.
To explicitly limit Control Center’s resource requests:
Source code
Terminal
helm upgrade control-center oci://docker.io/vaadin/control-center \
-n vaadin \
--reuse-values \
--set global.resources.limits.cpu=2 \
--set global.resources.limits.memory=512Mi
This ensures Control Center workloads do not exceed the specified limits for CPU and memory.
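To review which values are currently set on the release, including the limits above, Helm can list the user-supplied values:
Source code
Terminal
helm get values control-center -n vaadin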
Limiting Resources for Deployed Applications
When deploying applications using the App resource, resource requests and limits can be set via spec.resources. This controls how much CPU and memory the application can request and use.
Source code
reporting-app.yaml
apiVersion: vaadin.com/v1alpha1
kind: App
metadata:
  name: reporting-app
spec:
  host: reports.example.com
  image: company/reports-ui:1.0
  version: 1.0
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 512Mi
This configuration:
- Requests 0.5 vCPU and 256 MiB of memory per pod (for scheduler placement)
- Enforces a hard limit of 1 vCPU and 512 MiB of memory to prevent overconsumption
Use resource settings to protect the stability of your cluster, especially when running multiple applications in shared environments.
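As a final check, describing the generated deployment should show the configured values under the container's Requests and Limits sections (this assumes the Deployment created for the App uses the same name, reporting-app):
Source code
Terminal
kubectl -n vaadin describe deployment reporting-app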