scalability blog


Just back from vacation and was expecting to see a discussion around the scalability blog from Joonas:

but I don’t see one.

I thought that the presentation was interesting enough to warrant a discussion; did I miss it somewhere?

I do have a few questions:

  1. Were the 3000 simulated users really there to support the 2593 transactions/minute figure from the example application?

  2. The CPU usage reflects the load from both the GUI requests and the SQL calls, correct? Should the SQL calls' CPU usage be ignored as so minor that the CPU load is effectively just the load from the GUI?

  3. Is it safe to draw a general conclusion like ‘for a moderately complex GUI, 3000 concurrent users is a good ballpark to assume (for initial deployment purposes), based on the typical load the GUI will put on the server’?

My sincere thanks to Joonas for publishing the results for discussion!


Unfortunately we still had some tests to run, but busier projects took over the team, so the scalability testing project remains somewhat unfinished. We will get back to it as soon as possible.

With the Apache JMeter test we were able to run 3000 parallel threads, each sending Ajax requests to a single Tomcat server as fast as possible. The measured average latency per request was 150 ms. This resulted in an average of 1912 sales transactions per minute. Unfortunately we were limited by the CPU load on the Apache JMeter server and thus were not able to fully utilize the UI server; the UI server's CPU load was only about 70%. It is possible that a single Tomcat server could handle the target of 2593 sales transactions per minute.
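As a rough cross-check of these figures, Little's Law (concurrency = throughput × latency) gives the aggregate request rate a setup like this would sustain. The sketch below uses only the numbers from the post; the class and method names are mine, purely for illustration.

```java
public class LoadEstimate {

    // Little's Law: concurrency L = throughput λ × latency W, so λ = L / W.
    // Returns the steady-state request rate (requests/second) implied by
    // a given number of flat-out threads and an average per-request latency.
    static double requestRate(int concurrentThreads, double avgLatencySec) {
        return concurrentThreads / avgLatencySec;
    }

    public static void main(String[] args) {
        // Figures from the post: 3000 JMeter threads, 150 ms average latency.
        double rate = requestRate(3000, 0.150);
        System.out.printf("~%.0f Ajax requests/s hitting the UI server%n", rate);
    }
}
```

That works out to roughly 20,000 Ajax requests per second aggregated across all threads, which makes it plausible that the JMeter host, not Tomcat, became the bottleneck first.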

But there are a couple of potential issues not taken into account:

  • In the above test everyone buys a ticket. In the real world, many people just visit the store and end up not buying anything.
  • In the above test we turned off the backend (business logic), which would probably be a bottleneck. Also, with HTTP-protocol-level simulation such as Apache JMeter, it is hard to simulate random selections, so we ended up selling the same seat over and over again. It may be that all data related to that seat ended up in the CPU cache, and a real-world scenario of selling random seats would be slower.
  • Running 3000 parallel threads only keeps 3000 HttpSessions in server memory. In real life, 1912 sales transactions per minute from only 3000 concurrent users is totally unrealistic; I would expect a user to take maybe 5 minutes to complete a purchase. Of course, this 5× multiplier should also be taken into account in server memory usage. On the other hand, the memory requirement of one session was measured to be about 188 kB, so even with 3000 × 5 concurrent sessions the memory requirement (for HttpSessions) would be only about 3 GB.
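The session-memory arithmetic in the last bullet can be written out explicitly. The figures (188 kB per session, a 5× multiplier on the 3000 threads) come from the post; the helper below is just an illustrative sketch, not part of the benchmark code.

```java
public class SessionMemory {

    // Estimated heap needed for HttpSessions alone:
    // sessions × per-session footprint, converted from kB to GB.
    static double sessionHeapGb(int sessions, int perSessionKb) {
        return sessions * (double) perSessionKb / (1024.0 * 1024.0);
    }

    public static void main(String[] args) {
        // A realistic user takes ~5 minutes per purchase, so the 3000
        // flat-out threads correspond to roughly 3000 × 5 live sessions.
        int realisticSessions = 3000 * 5;
        System.out.printf("~%.1f GB of heap for %d sessions at 188 kB each%n",
                sessionHeapGb(realisticSessions, 188), realisticSessions);
    }
}
```

With 15000 sessions at 188 kB each this comes to about 2.7 GB, matching the "only about 3 GB" estimate above.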

All the figures are for the UI part only; we turned off the backend for the tests. The actual reason was that we still had some problems with the distributed cache parameters. On the other hand, the numbers now more accurately reflect the performance of the Vaadin UI layer.

My conclusions would be:

  • The UI layer's CPU or network usage is rarely a scalability bottleneck.
  • If one pays attention to the data stored in the session, it is possible to serve tens of thousands of concurrent users per server.

Also, it should be remembered in all discussions whether we are talking about “ADHD” users clicking as fast as possible (the 3000 in this case) or realistic users (the 15000 in this case).

Hi Joonas,

Thank you very much for the reply; it clears up a lot. Many of the details I wasn’t able to glean from the slides, even after looking at them a few times.

I do look forward to seeing the final results when you finish off the remaining tests.

The conclusion looks outstanding for Vaadin. I was always under the impression that there was a greater server load with Vaadin than with, say, GWT, but even if there is, it doesn’t seem significant (from these initial results). The memory requirements don’t seem onerous either.

Thanks again for the insight.


Final results, benchmark source code, and the distributed load testing environment are finally released:

Overview and results

Step-by-step instructions on how the results were measured, to make validation easy and to give a basis for benchmarking your own applications

Source code for benchmark application