The obligatory scalability question - is Vaadin overweight on HttpSession?

Scanning the Book of Vaadin and browsing through some of the source code, I’ve learned that Vaadin stores the complete state of each client/browser instance of an application in an HttpSession. My question is: how does this scale?

To me this looks pretty hefty in terms of JVM resource usage. Obviously tucking away the complete state of an application on the server has some serious benefits in terms of ease of development - but I wonder at what cost. I know that one of Vaadin’s guiding philosophies is server-side logic because it is more secure and less exposed to attacks - but distributing resource consumption between client and server has some benefits too, right?

Are there any real performance metrics out there that show whether Vaadin is up to the task for anything beyond, let’s say, 500 concurrent users?

Thank you,
-J.

Yes - HttpSessions in Vaadin are quite fat. The real questions are: how fat, and does it matter?

A trivial application - say the Calculator example app - consumes maybe 10 kB of server memory per user. A fairly complex application (from a UI perspective) - say Sampler - consumes maybe 200 kB of server memory per user. To put this into perspective, let’s say we have a server with 8 GB of memory, of which 7 GB is reserved for the JVM heap. This implies that we can host over 35,000 concurrent users (sessions) per server running a complex UI (Sampler in this case). If you really have tens of thousands of concurrent users, you might want to a) add more memory, b) serialize less-active sessions to disk, or c) use multiple servers.
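To make the arithmetic explicit, here is the same estimate as a tiny Java snippet (the 7 GB and 200 kB figures are the ones from the paragraph above; real footprints are application specific):

```java
// Back-of-the-envelope session capacity: heap size divided by
// per-session footprint. Figures are illustrative, not measured.
long heapBytes = 7L * 1024 * 1024 * 1024;  // 7 GB reserved for the JVM heap
long sessionBytes = 200L * 1024;           // ~200 kB per complex-UI session
long sessions = heapBytes / sessionBytes;  // ~36,700 concurrent sessions
System.out.println(sessions);
```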

So I would argue that server memory usage is rarely an issue.

I have not seen public benchmarks, but I have seen production systems that handle considerably more concurrent users just fine on a single server. Creating a good benchmark might be impossible, as the memory footprint is application specific; a synthetic benchmark might therefore give a false picture.

Is there any good way to benchmark HttpSession memory usage on a production server?

We have 500 to 1000 concurrent users daily on one of our production servers, so we can measure it somehow. But we cannot install a profiler there, so it should be a profiler-less way to measure :wink:

Maybe you could serialize some of the current sessions (selected at random) to “/dev/null”, calculate the mean size of the serialized sessions, and multiply it by the number of sessions. This is far from an accurate number but would give a ballpark estimate without disturbing the live system.
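A minimal sketch of that idea, assuming the session attributes are Serializable (as Vaadin application state normally is). The SessionSizer class and the counting stream are made up for illustration; since the HttpSession object itself is container-specific, this serializes its attributes instead:

```java
import java.io.*;
import java.util.Enumeration;
import javax.servlet.http.HttpSession;

public final class SessionSizer {

    /** Returns the serialized size (in bytes) of a session's attributes. */
    public static long sizeOf(HttpSession session) throws IOException {
        CountingOutputStream counter = new CountingOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(counter);
        for (Enumeration<?> e = session.getAttributeNames(); e.hasMoreElements();) {
            Object value = session.getAttribute((String) e.nextElement());
            if (value instanceof Serializable) {
                out.writeObject(value);
            }
        }
        out.flush();
        return counter.count;
    }

    /** An OutputStream that discards bytes but counts them - a "/dev/null". */
    private static final class CountingOutputStream extends OutputStream {
        long count;
        @Override public void write(int b) { count++; }
        @Override public void write(byte[] b, int off, int len) { count += len; }
    }
}
```

Sampling a handful of sessions with this and averaging should give the per-session mean without writing anything to disk.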

Another option would be to just measure heap size (after garbage collection).
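For instance (a rough reading only; note that System.gc() is merely a hint the JVM may ignore):

```java
// Rough used-heap reading after suggesting a garbage collection.
Runtime rt = Runtime.getRuntime();
System.gc(); // only a hint; the JVM may ignore it
long usedBytes = rt.totalMemory() - rt.freeMemory();
System.out.println("Used heap: " + usedBytes / (1024 * 1024) + " MB");
```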

One more thing… For debugging and profiling a live system, BTrace could be a useful tool:


http://kenai.com/projects/btrace/

I have not tested it myself (other than in one Sun lab course at the last JavaOne), but it looks really interesting.
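To give an idea of what a BTrace script looks like (untested here, and the traced class/method are just an illustration), something along these lines prints a line whenever a session attribute is set:

```java
// Illustrative BTrace script: trace setAttribute() calls on any
// HttpSession implementation (the leading '+' matches subtypes).
import com.sun.btrace.annotations.BTrace;
import com.sun.btrace.annotations.OnMethod;
import static com.sun.btrace.BTraceUtils.println;

@BTrace
public class SessionAttributeTrace {
    @OnMethod(clazz = "+javax.servlet.http.HttpSession", method = "setAttribute")
    public static void onSetAttribute() {
        println("HttpSession.setAttribute called");
    }
}
```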

Interesting, will give it a try, thanks!