
Looking for directions to track/fix memory leak 8.1 #689

Closed
glock18 opened this issue Jun 21, 2017 · 2 comments


glock18 commented Jun 21, 2017

Hey guys!

It's more of a question than an issue, really. I'm looking for advice from someone more experienced than me (there are plenty of those, I'm sure!). Allow me to tell you a slightly longer version of my story, as I hope a small detail related to a node 0.12 issue I've been having might be of help to someone badass. Someone badass who is in the mood to spend time reading this 😄 For a slightly shorter version, please scroll right down to the second Node.js version block.

  • Node.js Version: 0.12.6, 0.12.18
  • OS: Ubuntu 14.04.2 LTS
  • Scope: runtime
  • Module (if relevant): possibly relevant - Socket.io@1.3.6

For a long time now I've been using node 0.12 (0.12.6 and 0.12.18 as of late), until I decided to upgrade the server to the latest version with the coming node 8 release.

Part of the reason for the upgrade is this issue: nodejs/node#2813
which I guess might be the reason for a memory leak we currently experience on 0.12, where memory usage tracking shows this behaviour:
a few minutes after start: rss ~250mb, heapTotal ~100mb, heapUsed ~70mb (these are, more or less, the expected values throughout the lifetime of the app).

Unfortunately, while heapUsed keeps growing and being cleaned up, heapTotal and rss only grow:
heapUsed very rarely goes over 200mb when the server has many connections, and it is always cleaned up to around 70mb when the number of connections drops to a minimum. heapTotal and rss do not drop, and eventually (it may take a day, or multiple weeks, or it may never actually come to this level) the memoryUsage snapshot looks like: { rss: 773mb, heapTotal: 968mb, heapUsed: 100mb }
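
(For context, the numbers above come from logging process.memoryUsage() periodically. A minimal sketch of what such logging could look like - the interval and formatting here are illustrative, not the exact code from my app:)

```js
// Minimal sketch (illustrative): log rss / heapTotal / heapUsed once a minute
// so the trends described above can be compared over time.
const toMb = (bytes) => Math.round(bytes / 1024 / 1024) + 'mb';

setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  console.log(new Date().toISOString(),
    'rss:', toMb(rss),
    'heapTotal:', toMb(heapTotal),
    'heapUsed:', toMb(heapUsed));
}, 60 * 1000);
```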

  • Node.js Version: 8.1.2
  • OS: Ubuntu 14.04.2 LTS
  • Scope: runtime
  • Module (if relevant): possibly relevant - Socket.io@2.0.3

After upgrading the app's node version from 0.12 to 8.1.2 (installing the newest available versions of all modules), I deployed the changes to a clone of the production server, where everything went without a hitch. Unfortunately, after doing the same on the production server, I saw a memory leak unlike anything I've seen in my life. App memory usage grew to 1.4GB - the maximum allowed (including heapUsed) - in a matter of minutes. The app kept being responsive all this time and afterwards, but I had a feeling it wasn't going to be long before it had to be killed, so I eventually rolled the changes back.

This kind of memory usage growth seems very unnatural to me (it looks like some runaway recursion creating new objects on every call and never releasing them).

If I don't find anything better, I'm planning to retry this whole thing on production next weekend and run with --inspect to see whether there is something suspicious in the heap - and I have a feeling there very well might be.
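
(Besides attaching --inspect live, another option I'm considering is dumping heap snapshots on demand. A minimal sketch using the heapdump npm module - the signal handling and file path here are just an assumption for illustration, not what the app currently does:)

```js
// Minimal sketch (assumption, not current app code): write a V8 heap snapshot
// to disk when the process receives SIGUSR2, so it can be opened later in
// Chrome DevTools without keeping an inspector session attached to production.
const heapdump = require('heapdump');

process.on('SIGUSR2', () => {
  const filename = '/tmp/heap-' + Date.now() + '.heapsnapshot';
  heapdump.writeSnapshot(filename, (err, file) => {
    if (err) console.error('heap snapshot failed:', err);
    else console.log('heap snapshot written to', file);
  });
});
```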

Another thing I thought about is trying to create a simpler version of the app that reproduces the issue right on the production server. With a lot of luck it might end up being something small and easier to comprehend.

I'm just a little bit chicken-hearted about messing with the production environment like this, and my attempts to reproduce the issue on a clone of the environment resulted in nothing but what seemed to be well-managed memory.

I also thought about forwarding all traffic from production to its clone to duplicate the load exactly, but I'm afraid it's not going to work: the 0.12 app and the 8.1 app run on different major versions of socket.io, so that traffic will most likely be incompatible with one of the environments. Furthermore, I suspect that websocket traffic cannot be forwarded like this once the connection is established.

My hope is that perhaps this behaviour is familiar to someone as described (the memory usage growth is really remarkable) and you can guess or point me towards what might be causing it. Maybe some modules or programming patterns (like recursion) are known to be able to cause this, and maybe there are ways for me to help myself with this issue. Maybe you have better suggestions than running --inspect on production against an app consuming 1.4G out of 2G of RAM and close to 100% CPU at the same time (sounds scary).

Thank you very much for reading! Looking forward to your replies!
Vasiliy


glock18 commented Jun 21, 2017

Now that I've posted this, I've got an idea of what could be causing this extreme memory usage growth, and the more I think about it the more it seems to be the case. It seems that the socket.io major version upgrade (socket.io v2 not being backward-compatible) might be causing this: at the time the server restarts with the new socket.io server version, there are definitely people with old-version socket.io clients still open. I will get right back once I either confirm or deny this assumption!


glock18 commented Jun 21, 2017

It seems that my last guess was right. With only 7 clients using the older socket.io client during a deployment, they managed to load the server to about 30% CPU and 600mb of RAM. With that said, I'm closing my own issue, as it is not related to nodejs.
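
(In case it helps anyone else: a minimal sketch of how such churn from stale clients could be made visible on the server side. The connection/disconnect events are standard socket.io; the counting and logging are purely illustrative, not the actual app code:)

```js
// Minimal sketch (illustrative only): count socket.io connects/disconnects per
// minute. Stale clients stuck on an incompatible protocol tend to show up as a
// very high churn rate right after a deployment.
const io = require('socket.io')(3000);

let connects = 0;
let disconnects = 0;

io.on('connection', (socket) => {
  connects += 1;
  socket.on('disconnect', () => { disconnects += 1; });
});

setInterval(() => {
  console.log('last minute - connects:', connects, 'disconnects:', disconnects);
  connects = 0;
  disconnects = 0;
}, 60 * 1000);
```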

glock18 closed this as completed Jun 21, 2017