But recent comments by Leif Astrand in #13653 indicate that a Vaadin change to pre-compression may affect the optimal choice of what gets compressed by the servlet:
There are some gzip servlet filter implementations out there that do not play nicely with
this change since they don't check for the Content-Encoding header in the response before
compressing the output, causing the response to be compressed twice.
Is there a best-practices approach to what gets compressed, what doesn’t, and what minimum size is likely to be near optimal?
I have no idea which naive gzip filters Leif is referring to, but I’d be really surprised if Tomcat’s connector-level gzip feature were so naive that it would try to gzip already-gzipped content. Your configuration looks just perfect to me. Even with the upcoming change it is still relevant to keep this on. The change just serves pre-compressed versions of static resources (e.g. the GWT-compiled widgetset) if available. In many applications you’ll also get a nice boost by compressing dynamic “state responses”.
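For reference, connector-level compression in Tomcat is enabled on the Connector element in server.xml. The values below are illustrative, not a recommended configuration; the attribute names match the Tomcat 7 documentation cited later in this thread:

```xml
<!-- server.xml: illustrative Connector settings; the threshold and
     MIME type list are examples, tune them for your application -->
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           compressionMinSize="1024"
           compressableMimeType="text/html,text/css,application/json,application/javascript" />
```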
The problematic gzip filter Leif had faced was the one from the EHCache-web package. I have sometimes tipped people to use it, so I just had to write about workarounds.
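The fix such filters need is to skip responses that already carry a Content-Encoding header. A minimal sketch of that check, with a hypothetical helper name (a real filter would read the header from the wrapped HttpServletResponse):

```java
// Sketch of the check a well-behaved gzip filter should perform before
// wrapping the response output. The class and method names are illustrative.
public class GzipCheck {
    /**
     * Returns false when the response is already encoded, e.g. when Vaadin
     * has served a pre-compressed .gz resource and set Content-Encoding.
     */
    static boolean shouldCompress(String contentEncoding) {
        return contentEncoding == null
                || contentEncoding.isEmpty()
                || "identity".equalsIgnoreCase(contentEncoding);
    }

    public static void main(String[] args) {
        System.out.println(shouldCompress(null));   // plain response: compress
        System.out.println(shouldCompress("gzip")); // already compressed: skip
    }
}
```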
Thanks for the tip on enabling compression on Tomcat. Just a note: according to
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html, compression may be disabled for files larger than 48 KB when sendfile is in use. To enable compression for larger files, set useSendfile to false.
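Assuming the default HTTP connector, that would look like the following illustrative server.xml fragment (trading sendfile's zero-copy serving of large files for compression of them):

```xml
<!-- server.xml: disable sendfile so large responses still pass
     through the connector's compression code path -->
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           useSendfile="false" />
```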