Web Application Scalability - Executive Summary

Matti Tahvonen
On Sep 24, 2014 1:56:00 PM

I recently had the privilege of putting my head together with Arun Gupta from Red Hat on web application scalability. We couldn’t come up with any silver bullets for this sometimes scary topic - there isn’t one. There are many different kinds of applications, and each has its own optimal setup and its own things to tackle.


In the recent webinar on the subject, we covered a long list of things one should consider when designing scalable applications. Although we most likely forgot a few, check it out for tips on how to structure your own scalability study. In case you want a shorter summary, here are the top three high-level tips for building scalable applications.

“Premature optimization is the root of…”

… all evil. Every experienced software architect knows this golden rule by Donald Knuth, and scalability is closely related to optimization. Most often it is wiser to invest your money in making your app better in other respects than scalability, until scalability actually becomes a problem for you.

Java and the JVM are super fast, Moore’s law stubbornly holds, memory is almost free, and even HD video streamed over the internet hasn’t driven traffic prices through the roof. Most applications have a relatively small number of concurrent users and scale well enough without any effort at all. Get a decent server or a PaaS solution and your application will most likely scale just fine.

Relying on a proven architecture is a safe bet

The previous tip is especially valid if you are basing your application on a proven architecture. The Java EE stack, and execution environments such as WildFly, have been proven in huge battles. By using commonly used approaches, you are very unlikely to face “surprises” that you cannot tackle.

Scalability-wise, Vaadin is pretty much a replacement for the JSF standard at the UI layer, and it is also proven to work under huge loads and in clustered environments. So it is a cheap way to spice up your UI with a more interactive alternative, gaining better user experience and developer productivity.

Test it

However, new things are of course not that well proven, and for some applications the standard Java EE architecture with an EJB and JPA backend just isn’t suitable. Also, if you are expecting a vast number of users for your application, it is always a good idea to verify your product’s scalability. Note that this is by no means in contradiction with the first tip. If everything goes well, you’ll just sleep better before your launch, but you will also learn essential facts about your application - or one fact to be more exact: its first bottleneck. Testing and profiling are the only way to be sure you are improving your application’s scalability in the right place.

Scalability may have been “proven” by developers’ unit tests and some calculations, but in practice things can fail in surprisingly many places. This is why your load testing setup should mimic production as closely as possible: use the same kind of server cluster, the same kind of load balancer, and generate the load over a realistic network setup.

In our exercise project with WebSockets, a technology that should in theory improve scalability, we didn’t really hit the limits of Vaadin or WildFly at all, but rather server limitations with default configurations and operating system constraints. Because WebSockets keep a real TCP connection constantly open for each concurrent user, the first bottleneck was the load balancer, which actually needs two connections per user. The default settings for HAProxy are fine for normal HTTP requests, but with constantly open WebSocket connections we initially failed with just a couple of hundred users.
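To give a concrete idea of the knobs involved, here is a sketch of WebSocket-friendly HAProxy (1.5+) settings. The numbers and server addresses are illustrative placeholders, not the values from our test setup - tune them against your own measurements:

```
# haproxy.cfg - sketch for long-lived WebSocket connections (HAProxy 1.5+)
global
    maxconn 100000           # default is far lower; each user costs two connections

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout tunnel  1h       # keep idle WebSocket tunnels alive (defaults would cut them)

frontend web
    bind :80
    maxconn 100000           # per-frontend limit, separate from the global one
    default_backend servers

backend servers
    balance roundrobin
    server wildfly1 10.0.0.11:8080 check
    server wildfly2 10.0.0.12:8080 check
```

The key difference from a plain HTTP setup is `timeout tunnel`, which governs established WebSocket connections instead of the short client/server timeouts.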

The next bottleneck was the maximum number of file descriptors, again caused by the vastly increased number of TCP connections compared to common HTTP-based setups. We ran out of them both in the load balancer and in the server cluster. In our test setup, which used Mac OS X as the server OS, we just couldn’t configure the limit above 10k - a reasonable number already, but our CPU was still almost idling under normal usage. Using a top-notch real server OS with proper configuration would have removed this bottleneck.
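On Linux, the descriptor limits we hit can be inspected and raised roughly as follows - a sketch assuming a typical distribution; the exact files and ceilings vary by OS (on Mac OS X the corresponding knobs are `launchctl limit maxfiles` and the `kern.maxfiles` sysctls):

```shell
# Per-process file descriptor limit (soft limit of the current shell)
ulimit -n

# System-wide ceiling on Linux (the file does not exist on other OSes)
cat /proc/sys/fs/file-max 2>/dev/null || true

# To raise the limits persistently on Linux, you would typically add to
# /etc/security/limits.conf (user name and values are illustrative):
#   haproxy  soft  nofile  65536
#   haproxy  hard  nofile  65536
# and raise the kernel ceiling with:
#   sysctl -w fs.file-max=200000
```

Remember to check every machine in the chain - in our case both the load balancer and the application servers ran out of descriptors.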

As load testing is such an essential and challenging topic, we are working on a separate article focusing on it. There are multiple tools for driving large numbers of simulated users, but we’ll be covering a tool called Gatling that we found very helpful in our exercise. It scales well and has nice support for testing WebSocket-based web applications, like modern Vaadin apps running on modern application servers such as WildFly.
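To give a flavour of what such a test looks like, here is a minimal Gatling 2 simulation sketch in which each virtual user opens a long-lived WebSocket, mimicking the one-connection-per-user load discussed above. The host, endpoint, message, and user counts are made-up placeholders, and the class needs the Gatling runtime to execute:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Sketch of a Gatling 2 simulation: every virtual user opens a WebSocket,
// sends one message, and holds the connection open for a while.
class WebSocketLoadSimulation extends Simulation {

  val httpProtocol = http
    .baseURL("http://loadbalancer.example.com")  // hypothetical host
    .wsBaseURL("ws://loadbalancer.example.com")

  val scn = scenario("WebSocket users")
    .exec(ws("open").open("/myapp/PUSH"))        // hypothetical push endpoint
    .pause(1)
    .exec(ws("ping").sendText("ping"))
    .pause(60)                                   // keep the connection open
    .exec(ws("close").close)

  // Ramp up thousands of users to hunt for the first bottleneck.
  setUp(scn.inject(rampUsers(5000) over (120 seconds)))
    .protocols(httpProtocol)
}
```

Ramping users gradually, rather than starting them all at once, makes it much easier to see at which concurrency level the first bottleneck appears.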

Matti Tahvonen has a long history in Vaadin R&D: developing the core framework from the dark ages of pure JS client side to the GWT era, and creating a number of official and unofficial Vaadin add-ons. His current responsibility is to keep you up to date with the latest and greatest Vaadin-related technologies. You can follow him on Twitter – @MattiTahvonen