Tools like Spring AI make it easy to call LLMs from Java. However, interacting with LLMs has created a new challenge when building UIs: How do we stream the generated Markdown content to the UI and render it efficiently?
In this blog post, I'll show you how to display live-updating Markdown content in your UI, in both React and Java-based views. I'll only include the most relevant code snippets in the post. You can find a link to the full project source on GitHub at the end of the post.
Calling an LLM to generate a token stream
Before we can display the streaming content, we need to generate it. In my project, I created a service that uses Spring AI to call OpenAI and returns a Flux<String>. I annotated it with @BrowserCallable so we can use it both from Flow and Hilla.
import com.vaadin.flow.server.auth.AnonymousAllowed;
import com.vaadin.hilla.BrowserCallable;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
@BrowserCallable
@AnonymousAllowed
public class AiService {

    private final ChatClient chatClient;

    public AiService(ChatClient.Builder builder) {
        chatClient = builder.build();
    }

    public Flux<String> getResponse(String prompt) {
        return chatClient.prompt()
                .user(prompt)
                .stream()
                .content();
    }
}
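Both UIs consume this stream the same way: each token emitted by the Flux<String> is appended to the content rendered so far. A minimal plain-JDK sketch of that accumulation pattern (the token list here is a hypothetical stand-in for the reactive stream):

```java
import java.util.List;
import java.util.function.Consumer;

public class TokenAccumulation {
    public static void main(String[] args) {
        // Hypothetical tokens, standing in for the Flux<String> emissions.
        List<String> tokens = List.of("# Hello", " ", "**world**");

        // The subscriber appends each incoming token to the content so far.
        StringBuilder content = new StringBuilder();
        Consumer<String> onNext = content::append;
        tokens.forEach(onNext);

        System.out.println(content); // # Hello **world**
    }
}
```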
Displaying streaming Markdown in React
For displaying Markdown in React, we'll use the react-markdown library. Install it with npm:

npm install react-markdown
We can then display the streaming content by appending the generated tokens into a signal and passing the signal value to the Markdown component:
import { useSignal } from '@vaadin/hilla-react-signals';
import { Button } from '@vaadin/react-components/Button.js';
import { TextField } from '@vaadin/react-components/TextField.js';
import Markdown from 'react-markdown';
import { AiService } from 'Frontend/generated/endpoints';

export default function HillaStreaming() {
  const prompt = useSignal<string>('');
  const content = useSignal<string>('');

  function getResponse() {
    // Reset the output before starting a new stream
    content.value = '';
    AiService.getResponse(prompt.value).onNext(token => {
      content.value += token;
    });
  }

  return (
    <div className="p-m flex flex-col gap-m">
      <div className="flex gap-m">
        <TextField className="flex-grow" value={prompt.value} onChange={e => prompt.value = e.target.value} />
        <Button theme="primary" onClick={getResponse}>Submit</Button>
      </div>
      <Markdown>{content.value}</Markdown>
    </div>
  );
}
Displaying streaming Markdown in Java
Vaadin Flow UI code runs on the server. To keep the communication as efficient as possible, we want to avoid parsing the Markdown on the server for each token and re-sending the entire content to the client several times a second.

Instead, we'll use the same React Markdown component as we did earlier and create a Java API for it that sends over only the new tokens as they are generated.
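To put rough numbers on the savings, consider a hypothetical response of 500 tokens at about 4 characters each: re-sending the full document on every token makes the total traffic grow quadratically with the token count, while sending only the appended token keeps it linear.

```java
public class StreamingCost {
    public static void main(String[] args) {
        // Hypothetical response: 500 tokens, ~4 characters each.
        int tokens = 500;
        int tokenLen = 4;

        long appendTotal = 0;     // traffic when only new tokens go over the wire
        long fullResendTotal = 0; // traffic when the whole document is re-sent each time
        long docLen = 0;
        for (int i = 0; i < tokens; i++) {
            docLen += tokenLen;
            appendTotal += tokenLen;   // grows linearly
            fullResendTotal += docLen; // grows quadratically
        }
        System.out.println("append-only total: " + appendTotal + " chars");   // 2000
        System.out.println("full-resend total: " + fullResendTotal + " chars"); // 501000
    }
}
```

With appends, the total traffic equals the length of the response; with full re-sends, it is roughly 250 times larger in this example.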
We'll begin by defining the server part of the component:
import com.vaadin.flow.component.Tag;
import com.vaadin.flow.component.dependency.JsModule;
import com.vaadin.flow.component.dependency.NpmPackage;
import com.vaadin.flow.component.react.ReactAdapterComponent;

@NpmPackage(value = "react-markdown", version = "9.0.1")
@JsModule("./flow/markdown-component.tsx")
@Tag("markdown-component")
public class Markdown extends ReactAdapterComponent {

    private String markdown = "";

    public Markdown() {
    }

    public Markdown(String markdown) {
        setMarkdown(markdown);
    }

    public void setMarkdown(String markdown) {
        this.markdown = markdown;
        getElement().executeJs("this.markdown = $0", markdown);
    }

    public void appendMarkdown(String additionalMarkdown) {
        this.markdown += additionalMarkdown;
        getElement().executeJs("this.markdown += $0", additionalMarkdown);
    }

    public void clear() {
        setMarkdown("");
    }

    public String getMarkdown() {
        return markdown;
    }
}
The component supports setting the full Markdown content through the setMarkdown method, or appending content to the existing Markdown with the appendMarkdown method.
Note that we defined the npm dependency on react-markdown in the component as well. This way, Vaadin will ensure it's installed for anyone using the component.
Next, we define the client-side counterpart of the component as a web component:
import { ReactAdapterElement, RenderHooks } from 'Frontend/generated/flow/ReactAdapter';
import Markdown from 'react-markdown';

class MarkdownElement extends ReactAdapterElement {
  protected override render(hooks: RenderHooks) {
    const [markdown] = hooks.useState('markdown', '');
    return <Markdown>{markdown}</Markdown>;
  }
}

customElements.define('markdown-component', MarkdownElement);
Here, we define a markdown state variable and bind it to the Markdown component to display it. The state value gets updated by the executeJs calls made on the server.
Our custom Markdown component is now ready to use. We call the AiService and consume the reactive stream it produces.
import com.vaadin.flow.component.UI;
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.button.ButtonVariant;
import com.vaadin.flow.component.orderedlayout.HorizontalLayout;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;

public class JavaStreaming extends VerticalLayout {

    public JavaStreaming(AiService ai) {
        var ui = UI.getCurrent();

        var promptInput = new TextField();
        var submitButton = new Button("Submit");
        submitButton.addThemeVariants(ButtonVariant.LUMO_PRIMARY);
        var outputMarkdown = new Markdown();

        submitButton.addClickListener(e -> {
            outputMarkdown.clear();
            ai.getResponse(promptInput.getValue())
                .subscribe(ui.accessLater(outputMarkdown::appendMarkdown, null));
        });

        var form = new HorizontalLayout(promptInput, submitButton);
        form.setWidthFull();
        form.expand(promptInput);

        add(form, outputMarkdown);
    }
}
Finally, we need to add a @Push annotation on the Application class to enable the WebSocket-based push connection.
@Push
@SpringBootApplication
@Theme(value = "streaming-responses")
public class Application implements AppShellConfigurator {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Source code
You can find the complete source code for the example on GitHub.
Displaying live-updating Markdown in your UI is simple with tools like Spring AI and Vaadin. Whether you're using React or Vaadin Flow, these approaches help you efficiently handle streaming content in modern, dynamic applications.
New to Vaadin? Get started with your first project hands-on at start.vaadin.com.