Atmosphere: Connection remotely closed

Since we enabled Push in our application, we get tons of exceptions of the following type:

2015-08-10 14:32:06,934 ERROR [Atmosphere-Shared-AsyncOp-14341]
com.vaadin.server.communication.PushAtmosphereHandler
Exception in push connection
java.io.IOException: Connection remotely closed for c84be8fd-2168-4759-990f-b36bbc4f2712
    at org.atmosphere.websocket.WebSocket.write(WebSocket.java:229)
    at org.atmosphere.websocket.WebSocket.write(WebSocket.java:219)
    at org.atmosphere.websocket.WebSocket.write(WebSocket.java:47)
    at org.atmosphere.cpr.AtmosphereResponse$2.write(AtmosphereResponse.java:552)
    at org.atmosphere.handler.AbstractReflectorAtmosphereHandler.onStateChange(AbstractReflectorAtmosphereHandler.java:148)
    at com.vaadin.server.communication.PushAtmosphereHandler.onStateChange(PushAtmosphereHandler.java:51)
    at org.atmosphere.cpr.DefaultBroadcaster.invokeOnStateChange(DefaultBroadcaster.java:1074)
    at org.atmosphere.cpr.DefaultBroadcaster.prepareInvokeOnStateChange(DefaultBroadcaster.java:1094)
    at org.atmosphere.cpr.DefaultBroadcaster.executeAsyncWrite(DefaultBroadcaster.java:899)
    at org.atmosphere.cpr.DefaultBroadcaster$3.run(DefaultBroadcaster.java:520)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

We have also had performance issues since enabling push. For some users the app frequently becomes unresponsive, showing endless loading indicators.
Is this a known issue? What could be the reason for this exception, and can it cause an unresponsive application?

Could a proxy on the client side be responsible for the hanging UIs when push is activated?

Our setup is:
Vaadin 7.5.0, Automatic push mode
Jetty 9
HAProxy

Hi, this is presumably caused by the proxy. You should configure it to never time out on websocket connections, and possibly make other changes as well, such as disabling buffering.
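
For reference, a minimal sketch of websocket-friendly HAProxy timeout settings. The option names are from the HAProxy configuration manual; the concrete values here are assumptions and should be tuned to your deployment (in particular, the client/server timeouts must be longer than Vaadin's heartbeat interval, which defaults to 300 seconds):

[code]
defaults
    mode http
    # Idle timeouts for ordinary HTTP traffic; must exceed
    # Vaadin's heartbeat interval (default 300 s)
    timeout client  360s
    timeout server  360s
    # After a connection is upgraded to a websocket, HAProxy applies
    # "timeout tunnel" instead of the client/server timeouts, so this
    # is the setting that keeps long-lived push connections open
    timeout tunnel  3600s
[/code]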

Thanks for the advice, Johannes.
Over the last few days we tried reconfiguring HAProxy to fix this issue and added some options that seemed relevant. Still no luck; we keep getting this exception. The relevant fragment of our HAProxy config currently looks like this:

[code]
defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    # for jetty, tomcat…
    option http-pretend-keepalive

    retries 3
    option redispatch
    maxconn 4000
    timeout connect 5s
    timeout client 600s
    timeout server 600s
    timeout client-fin 60s
    timeout tunnel 3600s
[/code]

We didn’t find any meaningful buffering options for HAProxy…
I hope someone can help and give some additional pointers.
Thanks