What is AI tool calling?
Spring AI tool calling lets Large Language Models (LLMs) such as OpenAI's GPT models invoke methods in your Java application: the model requests a tool call, and Spring AI executes the matching Java method and feeds the result back. This creates a powerful bridge between natural language input and your business logic without complex prompt engineering.
Key components
The example application demonstrates a simple chat interface where users can interact with an AI assistant that has access to database operations:
@Service
public class PersonService {

    // Repository dependency omitted for brevity

    @Tool(description = "Retrieves all persons from the database")
    public List<Person> findAll() {
        return repository.findAll();
    }

    @Tool(description = "Saves a person entity to the database")
    public Person save(
            @ToolParam(description = "The person entity to be saved") Person entity
    ) {
        return repository.save(entity);
    }
}
The @Tool annotation exposes methods to the AI, while @ToolParam provides context about the parameters.
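Spring AI derives a JSON schema from the parameter type, so the model knows which fields a Person has when it calls save. The entity itself is a plain JPA class along these lines (the fields shown here are assumptions; the actual example project may differ):
@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    // Illustrative fields; the real entity may look different
    private String firstName;
    private String lastName;

    // Getters and setters omitted for brevity
}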
Creating the chat interface
The Vaadin view integrates these annotated services with a chat interface:
public PeopleView(PersonService personService, ChatClient.Builder builder) {
    var chatClient = builder
            .defaultTools(personService)
            .build();

    // UI components setup omitted

    messageInput.addSubmitListener(event -> {
        var userMessage = event.getValue();
        messageList.add(new MarkdownMessage(userMessage, "You"));

        var assistantMessage = new MarkdownMessage("", "Assistant");
        messageList.add(assistantMessage);

        chatClient.prompt()
                .user(userMessage)
                .stream()
                .content()
                .subscribe(assistantMessage::appendMarkdownAsync);
    });
}
This code wires the user's input to the AI processing and streams the rendered response back into the chat.
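For orientation, here is a rough sketch of how the surrounding view class might be structured. The @Route mapping and field types are assumptions for illustration (the message area is modeled as a plain VerticalLayout, and MarkdownMessage is assumed to come from the Viritin add-on), so the actual project may differ:
@Route("")
public class PeopleView extends VerticalLayout {

    // Assumed field types; the actual project may use different components
    private final VerticalLayout messageList = new VerticalLayout();
    private final MessageInput messageInput = new MessageInput();

    public PeopleView(PersonService personService, ChatClient.Builder builder) {
        // ... constructor body as shown above ...

        // Lay out the scrolling message area above the input field
        var scroller = new Scroller(messageList);
        scroller.setSizeFull();
        add(scroller, messageInput);
        setSizeFull();
    }
}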
How it works
When a user sends a message like "Show me all people in the database," the following happens:
- The message is sent to the LLM with metadata about available tools
- The LLM determines it needs to call the findAll() method
- Spring AI executes the Java method and returns results to the LLM
- The LLM generates a human-friendly response incorporating the data
- The response streams back to the Vaadin UI
All of this happens seamlessly with minimal configuration.
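The same round trip can also be triggered without streaming, which is handy for quick experiments or tests. A minimal sketch using the blocking call() API (the prompt text is only an example):
// Blocks until the full response, including any tool calls, has completed
String answer = chatClient.prompt()
        .user("Show me all people in the database")
        .call()
        .content();
System.out.println(answer);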
Benefits
- No prompt engineering needed: The AI understands your domain models through annotations
- Type safety: Method calls remain fully type-safe
- Security: You explicitly control which methods are exposed (see the sketch after this list)
- Developer experience: Write regular Java code, not complex prompts
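To illustrate the security point above: only methods annotated with @Tool are advertised to the model. A hedged sketch, assuming a hypothetical deleteAll method added to PersonService (it is not part of the example project):
// Not annotated with @Tool: the model never sees this method,
// even though the bean is registered via defaultTools(personService)
public void deleteAll() {
    repository.deleteAll();
}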
Try it yourself
The full source code is available at https://github.com/marcushellberg/vaadin-spring-ai-tools.
The repository includes everything you need to get started, including a working Vaadin UI and preconfigured Spring AI integration.
This application pattern works particularly well for internal tools, customer service applications, and any scenario where you want to combine the power of modern LLMs with your existing Java backend.
For a deeper dive, check out our tutorial on building a minimalist Java application with Vaadin and Spring AI.