Incomplete heap dump? #62
Duplicate of #50.
If you use 0.12 and perform https requests from the server (for example), you might be hit by something like nodejs/node#1522 (fixed in 1.8.2) — that memory is outside of the heap and is not visible in heapUsed or in heap snapshots. Though that example didn't raise the memory usage above 150 MiB when I tested, the main point here is that heap snapshots do not measure all of a process's memory.
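The point above — that heap snapshots only cover V8's heap — can be checked with `process.memoryUsage()`. A rough sketch (the `toMiB` helper is illustrative; the rss-minus-heapTotal gap is only an approximation of off-heap memory such as Buffers, native allocations, and code):

```javascript
// Compare resident set size against V8's heap to estimate memory that
// a heap snapshot cannot see.
const usage = process.memoryUsage();
const toMiB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

const offHeap = usage.rss - usage.heapTotal;
console.log(`rss:               ${toMiB(usage.rss)} MiB`);
console.log(`heapTotal:         ${toMiB(usage.heapTotal)} MiB`);
console.log(`heapUsed:          ${toMiB(usage.heapUsed)} MiB`);
console.log(`off-heap (approx): ${toMiB(offHeap)} MiB`);
```

If rss keeps climbing while heapUsed stays flat, the leak is likely outside the heap and no snapshot diff will show it.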
Gah. We do perform https requests from the server. Does this still happen in Node 0.12.7? I'd rather not migrate to io.js. I have updated, adding
@jspavlick Have any negative effects come up since you've been running your application with that flag?
@BernhardBezdek Not sure I can answer your question very well. I haven't done any extensive profiling, but everything seems to be fine. (Yes, I know, I'm a bad person, haha.) I have done some timing, though.
And sometimes the GC takes really long — over 1000 ms. If the process is in the middle of servicing an HTTP request, that's going to kill the response time, since the GC locks the process until it's finished, as you said. Now, I'm not sure whether the "system-invoked" GC also takes over 1000 ms. If so, well, no difference. If not... then I'm sure I'm increasing my 99th-percentile response time. The alternative, for me at least, was to manually reboot the process every ~8 hours. If I forgot, it would hit Heroku's out-of-memory errors and requests would start timing out left and right. So for me at least, this is the lesser of the two evils.
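The manual-GC workaround discussed above can be sketched roughly like this. It requires starting node with `--expose-gc`; otherwise `global.gc` is undefined and the function no-ops. The function name and the 30-second interval are illustrative, not from the thread:

```javascript
// Periodically force a full GC when --expose-gc is set, and time it so
// long pauses (like the >1000 ms ones mentioned above) can be observed.
function maybeCollect() {
  if (typeof global.gc !== 'function') {
    return { ran: false, ms: 0, freed: 0 };
  }
  const before = process.memoryUsage().heapUsed;
  const start = Date.now();
  global.gc();
  return {
    ran: true,
    ms: Date.now() - start,
    freed: before - process.memoryUsage().heapUsed,
  };
}

// e.g. every 30 seconds; unref() so the timer doesn't keep the process alive.
setInterval(maybeCollect, 30 * 1000).unref();
```

Logging `ms` from each call would show whether forced collections are any shorter than the pauses V8 triggers on its own.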
@jspavlick Thanks for your response. I'm actually using the garbage collector too and see better memory allocation, but a comment on Stack Overflow advised against manual gc: *"Running with --expose-gc and calling gc() periodically is bad practice. V8 collects garbage as needed via incremental marking and lazy sweeping. A suitable way to handle the behavior above..."* Now, after one more day of analyzing and optimizing, I found two main issues. For a first analysis I stored `process.memoryUsage()` output in a CSV and visualized it in Google Docs.
I'm having trouble figuring out a memory leak. Both New Relic and Heroku are reporting that I am hitting the Heroku memory quota of 512 MB.
Looks like a leak alright...
However, my post-boot (i.e. when no leaks should have occurred) and pre-reboot (i.e. in full leak-mode) dumps are...less than helpful...
According to the heap dumps, the heap has grown by... 7 MB? I don't think so...?
Am I doing something wrong? I'm running Node 0.12.7 on Heroku.