migrated from Bugzilla #487197
status ASSIGNED severity major in component websocket for 9.3.x
Reported in version 9.3.7 on platform PC
Assigned to: Joakim Erdfelt
On 2016-02-04 04:44:22 -0500, Rafael Campos wrote:
Created attachment 259559
Heap usage during the leak
It seems that there is a non-heap memory leak when a WebSocket connection is established using the permessage-deflate extension. I've tried connecting and disconnecting 1000 websockets again and again, without sending or receiving anything, and the RSS size of the process grows from 3.8G (I'm using -Xmx3072m -Xms3072m -XX:+UseConcMarkSweepGC) up to all the memory available in the system (I've seen up to 7.3G), while the heap remains at around 1G or 1.5G (see attached file).
If I repeat the same test without enabling compression in the client (not sending the permessage-deflate extension header), there is no such leak and the memory remains stable.
With compression enabled, each batch of 1k sockets consumes around 70-80M of extra memory that is not freed after they are disconnected, even when forcing GC with JConsole or calling System.runFinalization() to ensure that the finalize() methods of the deflaters/inflaters are called.
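For reference, the test loop is roughly the following. This is only a sketch assuming the Jetty 9.3 WebSocketClient API; the URI and the empty endpoint are placeholders:

    import java.net.URI;
    import java.util.concurrent.Future;

    import org.eclipse.jetty.websocket.api.Session;
    import org.eclipse.jetty.websocket.api.WebSocketAdapter;
    import org.eclipse.jetty.websocket.client.ClientUpgradeRequest;
    import org.eclipse.jetty.websocket.client.WebSocketClient;

    // Rough repro sketch: open and close sockets that negotiate permessage-deflate,
    // never send any data, and watch the process RSS grow while the heap stays flat.
    public class DeflateLeakRepro
    {
        public static void main(String[] args) throws Exception
        {
            WebSocketClient client = new WebSocketClient();
            client.start();
            URI uri = URI.create("ws://localhost:8080/ws"); // placeholder endpoint
            for (int i = 0; i < 1000; i++)
            {
                ClientUpgradeRequest request = new ClientUpgradeRequest();
                request.addExtensions("permessage-deflate"); // the header that triggers the leak
                Future<Session> future = client.connect(new WebSocketAdapter(), uri, request);
                Session session = future.get();
                session.close();
            }
            client.stop();
        }
    }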
I think the offending lines are in the CompressExtension class, as if I replace the following lines in the constructor:
deflater = new Deflater(Deflater.DEFAULT_COMPRESSION,NOWRAP);
inflater = new Inflater(NOWRAP);
By:
deflater = null;
inflater = null;
There is no such leak on connection, but obviously I can't send/receive anything as I will only get NPEs.
On 2016-02-04 12:20:48 -0500, Joakim Erdfelt wrote:
Been analyzing this via jmap, dumped hprofs, and Eclipse MemoryAnalyzer.
The WebSocket session and extensions are being cleaned up.
However, it seems the Deflater and Inflater instances are getting hung up in their own java.lang.ref.Finalizer (boggle)?
Going to try to either be explicit with .end() or possibly introduce Deflater and Inflater object pools (grumble)
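For context, "being explicit with .end()" means tying the native zlib cleanup to the extension/connection teardown instead of waiting for finalization. A minimal JDK-only sketch of that idea (not the actual CompressExtension code):

    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    // Sketch only: release the native zlib state as soon as the extension is
    // done with it, instead of relying on finalize() to call end() eventually.
    public class CompressorPair implements AutoCloseable
    {
        private final Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // nowrap
        private final Inflater inflater = new Inflater(true); // nowrap

        public Deflater getDeflater() { return deflater; }
        public Inflater getInflater() { return inflater; }

        @Override
        public void close()
        {
            // end() frees the native memory immediately; calling it from the
            // connection close path avoids depending on the finalizer queue.
            deflater.end();
            inflater.end();
        }
    }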
On 2016-02-04 12:40:12 -0500, Rafael Campos wrote:
Thanks Joakim.
I am working on a workaround with a pool of Deflaters/Inflaters and the initial tests look promising. At least, the RSS memory is not increasing forever. If you want, I can attach a patch tomorrow so you can test and evaluate it.
The only issue that concerns me is that right now I have it "capped", so I create at most X instances. That works for us, as we know the number of users of our servers, but I don't know if it is reasonable as a general solution.
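For what it's worth, the kind of capped pool I mean looks roughly like this; the names and the blocking-on-exhaustion behaviour are just my choices for the sketch, and an Inflater pool would be symmetric:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.zip.Deflater;

    // Sketch of a capped Deflater pool: at most `capacity` native zlib streams
    // are ever created, and they are reused across connections.
    public class DeflaterPool
    {
        private final BlockingQueue<Deflater> pool;

        public DeflaterPool(int capacity)
        {
            pool = new ArrayBlockingQueue<>(capacity);
            for (int i = 0; i < capacity; i++)
                pool.add(new Deflater(Deflater.DEFAULT_COMPRESSION, true)); // nowrap
        }

        // Blocks when all instances are in use (the "cap" mentioned above).
        public Deflater acquire() throws InterruptedException
        {
            return pool.take();
        }

        public void release(Deflater deflater)
        {
            deflater.reset(); // clear per-connection state before reuse
            pool.offer(deflater);
        }
    }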
On 2016-02-04 13:04:18 -0500, Joakim Erdfelt wrote:
On 2016-02-04 14:16:18 -0500, Rafael Campos wrote:
I will test that tomorrow ASAP and will report the results, as right now I have no access to the code.
BTW, I was debugging and it seems that the garbage collector does invoke finalize() at some point, which leads to a call to end(), but the leak was still there. Maybe by then the Deflater/Inflater has lost some internal references and that is what causes the leak; calling end() "manually" instead of waiting for the GC to finalize the object fixes the issue.
On 2016-02-05 06:43:16 -0500, Rafael Campos wrote:
I have been testing this but it doesn't seem to work. The leak is not so obvious now, as it doesn't start until we begin sending/receiving data (I think because the inflater/deflater instantiation is now delayed), but again I've been able to raise the RSS size of the JVM up to 5G with just 3G of heap.
It seems that the JVM itself doesn't know about this extra memory: after adding the option -XX:NativeMemoryTracking=detail to the JVM and setting a baseline, the growth it reports is not consistent with the increase in the actual size of the JVM process in the system.
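For reference, the baseline/diff I'm comparing is just the standard jcmd sequence ({pid} is a placeholder, as above):

    jcmd {pid} VM.native_memory baseline
    # ...connect/disconnect another batch of sockets...
    jcmd {pid} VM.native_memory detail.diff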
Also, taking a look at /proc/{pid}/smaps I can see several blocks like the following one:
And dumping the memory at that address (gdb --batch --pid {pid} -ex "dump memory test 0x7f1aa0000000 0x7f1aa4000000") and opening the dump in a hex editor, I can see the messages that have been sent through the websocket, so I guess this is the memory leaked by the inflaters/deflaters.
Bug #300 was opened to attempt to address the Deflater / Inflater memory leak issue.
Will keep this one open until such time as that Object Pool has been integrated and proves successful.
Is there any update on this issue? We are also facing massive memory leaks due to this. We disabled websocket compression by unregistering the permessage-deflate extension. That solved the memory leaks, but it caused slowness when sending data to the websocket remote endpoint and resulted in production issues.
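For anyone else needing the same workaround, a minimal sketch of how the extension can be unregistered, assuming the Jetty 9.3 WebSocketServlet API (the servlet and endpoint class names are placeholders):

    import org.eclipse.jetty.websocket.servlet.WebSocketServlet;
    import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;

    public class MyWebSocketServlet extends WebSocketServlet
    {
        @Override
        public void configure(WebSocketServletFactory factory)
        {
            // Drop permessage-deflate so clients cannot negotiate compression;
            // this trades the native memory growth for larger frames on the wire.
            factory.getExtensionFactory().unregister("permessage-deflate");
            // MyEndpoint is a placeholder for the application's websocket endpoint class.
            factory.register(MyEndpoint.class);
        }
    }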