The default setup for Vaadin web applications is designed with the same principle as the library itself: developer productivity. The war file generated by our archetypes or by the Eclipse plugin is usable as such for most applications.
But if you wish to optimize your setup for speed and hosting efficiency, there are a couple of tricks you can apply quite easily. The first of the three "tricks" presented here, in particular, is something every Vaadin developer should have in their toolbox.
I: Enable gzip compression
Gzip is a fast algorithm for both web servers and browsers, so the compression overhead is rarely a concern. The CPU cycles you spend compressing the content may well be won back by the shorter lifetime of active server threads. To minimize CPU usage on the application server, compression can also be offloaded to a front proxy, which may even have dedicated hardware for it, and static files can be precompressed.
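How compression is enabled depends on your server. If you happen to run on Apache Tomcat, for example, it can be switched on directly in the HTTP connector in server.xml; the sketch below uses the Tomcat 7/8 attribute names and an illustrative port:

```xml
<!-- server.xml: enable gzip on the HTTP connector (Tomcat 7/8 attribute names) -->
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           compressionMinSize="1024"
           compressableMimeType="text/html,text/css,application/javascript,application/json" />
```

Other containers and front proxies have equivalent switches; check your server's connector or gzip module documentation.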
You can read more about the topic on the Vaadin forum.
II: Optimize your widgetset

The widgetset, a.k.a. the client-side engine, is the largest piece that Vaadin web app users typically transfer. The default setup contains quite a lot of "client side components", and the initial piece of JavaScript is nowadays over 1 MB by default. Gzip compression can cut that to about one fourth, but there is more we can do about it.
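To get a feel for that ratio, here is a small standalone sketch (not Vaadin-specific; the payload and sizes are purely illustrative) that compresses a repetitive, JavaScript-like payload with java.util.zip:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Illustrates the compression ratio gzip reaches on repetitive text,
// such as the generated JavaScript of a widgetset.
public class GzipRatioDemo {

    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a widgetset-like payload: lots of similar identifiers
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5000; i++) {
            sb.append("function com_example_widget_").append(i % 50)
              .append("(){return new Object();}\n");
        }
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] zipped = gzip(raw);
        System.out.println("raw: " + raw.length
                + " bytes, gzipped: " + zipped.length + " bytes");
    }
}
```

On highly repetitive generated code like this, the compressed size easily drops well below a quarter of the original; real widgetset JavaScript compresses somewhat less dramatically but still substantially.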
With a customized version of ConnectorBundleLoader you can control which client-side components are compiled into your client-side engine. You can also control how they are loaded. For example, if some components are visible only on a rarely used screen, those widgets can be loaded lazily, saving some bytes at application start.
In the latest Vaadin versions, there is a handy helper in the "debug console", opened with the "?debug" query parameter. Clicking the "magic wand" in its third tab lists all the connectors in use and suggests an optimized implementation of the ConnectorBundleLoader class, along with instructions for how to configure your .gwt.xml file to actually use it. No advanced GWT tricks need to be learned.
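The .gwt.xml configuration that hooks in the generated class looks roughly like this (the factory class and package name below are illustrative; use the one the debug window generates for your project):

```xml
<!-- In your widgetset's .gwt.xml file -->
<generate-with class="com.example.OptimizedConnectorBundleLoaderFactory">
    <when-type-assignable class="com.vaadin.client.metadata.ConnectorBundleLoader" />
</generate-with>
```

After adding this, recompile the widgetset so the optimized bundle loader takes effect.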
The ”Debug console” showing hints for an optimized widgetset
III: Host static resources from a separate server
In addition to the compiled GWT module, Vaadin has "static resources": themes, consisting of CSS, images and fonts, and "vaadinBootstrap.js". By default these live in the "VAADIN" directory, in your war file or in libraries, and are served via VaadinServlet. VaadinServlet is by no means optimized for serving small files, and serving them does not really require an advanced Java server. They can thus be outsourced to a CDN provider, or to your own server or servlet that is better optimized for static content.
Most modern Java servers have well-optimized default handlers for small files and serve them directly from a memory cache. Lightweight servers like lighttpd and nginx are also popular options for serving static content; if you use one as a front proxy, you can configure it to handle your static files as well.
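As a rough sketch, an nginx server block serving such a pre-assembled static file directory might look like this (the host name and filesystem paths are placeholders):

```nginx
server {
    listen 80;
    server_name your-cdn.provider.com;

    location /cdn-example/ {
        alias /var/www/cdn-example/;  # directory containing VAADIN/...
        expires 1y;                   # versioned static files are safe to cache aggressively
        gzip on;
        gzip_types text/css application/javascript;
        # gzip_static on;             # serve precompressed .gz files, if the module is built in
    }
}
```

The long cache expiry works well here because the widgetset and theme file names change when you recompile, so stale caches are not a problem.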
The files you need in your static file directory are:
vaadinBootstrap.js - copy this from vaadin-server.jar
the widgetset - the GWT-generated files from VAADIN/widgetsets/, or the corresponding files for the default widgetset from vaadin-client-compiled.jar
theme resources - your own custom theme, with any inherited resources from the core themes in vaadin-themes.jar, or resources from a third-party theme
The screenshot shows a minimal setup for a "CDN": the compiled widgetset, vaadinBootstrap.js and the default "reindeer" theme with its parent theme "base".
When you have packaged and deployed the fileset to your CDN or your own static file server, the only thing left is to configure VaadinServlet to refer to the new static file location. This is controlled with the "Resources" servlet init parameter, whose value is the URL to your static files, without the VAADIN part. With a Servlet 3 style annotation, it can be configured as follows:
@WebServlet(
        value = "/*",
        initParams = @WebInitParam(
                name = "Resources",
                value = "http://your-cdn.provider.com/cdn-example"))
public class MyServlet extends VaadinServlet {
    // MyServlet is your VaadinServlet subclass; the name is illustrative
}
With these relatively simple "tricks", you already gain a nice improvement for your app, and for end users in particular the difference can be clearly noticeable. When the app starts, much less data is transferred, and the load can be spread across multiple servers, parallelizing the download of the resources.
This should also be a bit more efficient for the application server, because the only resources left for it to serve are the "host page" and the UI state responses. We'll continue on the subject later, focusing on scaling the application server's load horizontally across several server nodes.