Technical Setup of your Environment for AI

Configure API keys and IDE settings so your Vaadin application can connect to hosted or local LLMs.

Before you can connect your Vaadin application to a Large Language Model (LLM), you need to configure your development environment. The exact steps depend on which Java AI framework you use; refer to its documentation for detailed instructions on how to provide an API key or connect to a local model.

Local vs. Hosted LLMs

You can integrate either a hosted (cloud-based, commercial) LLM or a local (open-source, on-premise) LLM. Both allow you to connect your Java application to an AI model, but they differ in setup, cost, performance, and data privacy.

Hosted / Commercial LLMs

Hosted providers such as OpenAI or Anthropic run their models in large data centers and expose them via an API. To authenticate requests, you configure an API key in your project or IDE, as shown in the sketch after the note below.

Key characteristics:

  • Easy to integrate - no local installation or GPU required.

  • Access to state-of-the-art models - usually the latest, largest LLMs.

  • Usage-based pricing - costs depend on tokens or requests.

  • Data leaves your system - prompts and responses are processed by the provider.

Note
See API Key for your IDE for details on how to configure API keys in your local environment.
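
To make this concrete, here is a minimal sketch that calls a hosted provider over plain HTTP using the JDK's built-in HttpClient. The endpoint follows OpenAI's chat-completions API; the model name and prompt are illustrative, and in practice a Java AI framework typically builds this request for you. The key is read from the conventional OPENAI_API_KEY environment variable so it never ends up in source control:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HostedLlmExample {

    public static void main(String[] args) throws Exception {
        // Read the API key from an environment variable instead of
        // hard-coding it in the source.
        String apiKey = System.getenv("OPENAI_API_KEY");

        // Minimal chat-completion request body. The model name is an
        // example; use whichever model your provider offers.
        String body = """
                {
                  "model": "gpt-4o-mini",
                  "messages": [{"role": "user", "content": "Hello from Vaadin!"}]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Send the request and print the raw JSON response.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```

The same pattern applies to other hosted providers; only the endpoint, authentication header, and request body differ.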

Local / Open-Source LLMs

Alternatively, you can download and run open-source LLMs locally. Your Vaadin application then communicates with them through a local API endpoint such as http://localhost:1234, as shown in the sketch after the list of tools below.

This gives you more control over privacy and performance but requires sufficient compute resources.

Key characteristics:

  • Full data privacy - all prompts and responses stay on your machine.

  • No usage fees - once downloaded, local models are free to run.

  • Hardware-dependent - speed and quality depend on your CPU/GPU and RAM.

  • Model sizes vary - choose smaller models for faster responses or larger ones for higher accuracy.

Popular tools:

  • Ollama - run many LLMs locally with a simple CLI and API.

  • LM Studio - manage and run models locally with a graphical UI.
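
Most local runtimes expose an OpenAI-compatible HTTP API, so the request shape from the hosted example works unchanged against a local endpoint. The minimal sketch below assumes a model is already running in LM Studio on its default port 1234 (Ollama listens on port 11434 instead); the model name is illustrative and must match a model you have downloaded:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalLlmExample {

    public static void main(String[] args) throws Exception {
        // Local runtimes usually require no API key. The model name
        // must match a model you have downloaded locally.
        String body = """
                {
                  "model": "llama-3.1-8b-instruct",
                  "messages": [{"role": "user", "content": "Hello from Vaadin!"}]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                // LM Studio's default OpenAI-compatible endpoint.
                .uri(URI.create("http://localhost:1234/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```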

Quick Comparison

| Aspect      | Hosted / Commercial LLMs                                 | Local / Open-Source LLMs                              |
|-------------|----------------------------------------------------------|-------------------------------------------------------|
| Setup       | Simple: only configure an API key                        | Requires download, installation, and hardware setup   |
| Performance | High: runs on large GPU clusters                         | Depends on your local CPU/GPU and RAM                  |
| Costs       | Usage-based pricing (per token or request)               | Free to run after download, but hardware costs apply   |
| Privacy     | Data leaves your system and is processed by the provider | All data stays local on your machine                    |

Tip
Local LLMs are great for development, prototyping, or offline use. Hosted LLMs are easier to scale in production and often provide better accuracy.
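
One common way to act on this tip is to make the endpoint and key configurable, so the same code talks to a local model during development and a hosted one in production. A minimal sketch using system properties (the property names llm.base-url and llm.api-key are illustrative; a real application would use its framework's configuration mechanism):

```java
public class LlmConfig {

    // Illustrative property name: defaults to a local LM Studio
    // endpoint during development.
    public static String baseUrl() {
        return System.getProperty("llm.base-url", "http://localhost:1234/v1");
    }

    // Hosted providers need a real key; local runtimes ignore it.
    public static String apiKey() {
        return System.getProperty("llm.api-key", "not-needed-locally");
    }
}
```

Starting the application with -Dllm.base-url=https://api.openai.com/v1 and -Dllm.api-key=... then switches to the hosted provider without any code changes.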