Create heapdump on out of memory #27552
Comments
I'm not very keen to have this in Node.js core at the moment because it usually takes a lot of memory to take a heap snapshot. If the system is already in a V8 OOM situation, trying to take a heap snapshot can lead to an operating-system OOM (which might kill other important processes). If we could drastically reduce the memory used to take the heap snapshot, I would be happy to see this feature introduced in core.
It is possible to execute JS at that point, but in general I don't think it's a good idea to open this opportunity up, as when there is a fatal error we need to be very careful about what we execute (it's similar to signal handlers in some way).
This looks more promising (or just provide heap snapshots as one of the actions that can be specified to be done when a fatal error occurs, as we already make it possible to trigger node-report in the fatal error handler), though as @mmarchini points out it's at the users' own risk if they want to do that.
I would be happy to go for option 2, to make it another configurable CLI option. Memory use of the heapdump is indeed a risk, depending on how you use it.
I'd agree that a CLI flag makes sense; we should look at the existing option for generating a node-report and make it consistent with that.
@paulrutter if you make a snapshot you still will not have anything to compare with, because you need at least one more snapshot. I can assume that the answer to the question "why do you need a heapdump on OOM" is "inspect the state of memory". For this case you can use the flag --abort-on-uncaught-exception and create a coredump (stack trace + heapdump) on "process abort". And then you can explore the memory with llnode. It may take a little longer, but it definitely works.
@matvi3nko Comparison is only needed when you suspect a memory leak, but this is not always the reason for an out-of-memory situation. More often, the code being executed is not memory efficient (for example: reading a whole file into memory at once instead of streaming). A single heapdump would show such issues, without the need for comparison. I tried llnode in the past and found it not very user friendly. Of course this is more of an experience issue at my end, but I still think it would be beneficial to other Node.js users to have a more entry-level heapdump generation process in place.
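As an aside, a minimal sketch of the kind of memory-inefficient pattern meant here (the function names and file paths are illustrative, not from this thread): reading a whole file into one buffer versus streaming it chunk by chunk.

```js
'use strict';
const fs = require('fs');

// Memory-hungry: the entire file is held in one Buffer/string on the heap.
// A single heap snapshot taken near OOM would show this large allocation
// dominating the retained size, with no "before" snapshot needed.
function copyBuffered(src, dest) {
  const data = fs.readFileSync(src);
  fs.writeFileSync(dest, data);
}

// Memory-friendly: only small chunks are in memory at any point in time.
function copyStreamed(src, dest) {
  return new Promise((resolve, reject) => {
    fs.createReadStream(src)
      .pipe(fs.createWriteStream(dest))
      .on('finish', resolve)
      .on('error', reject);
  });
}
```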
With the latest Node.js 12.11.1, the node-oom-heapdump module stopped working. The APIs for creating a heapdump do not seem to have changed. It seems that calling "createHeapSnapshot" no longer works in the context that it did before. Is there any progress made on adding the functionality to Node.js core?
Does it work on v12.10?
No, it doesn't. Same behavior.
What about 12.0.0, 12.3.0 and 12.5.0 (those are the V8 bumps during Node.js v12)? If the issue is in V8 it's good to narrow down which version it started in. Also, you should be able to get a core dump if this is throwing a fatal error. This will allow you to print the native call stack, which should help find the issue. To generate a core dump, enable core dumps before running the process (e.g. `ulimit -c unlimited` on Linux), and then open the resulting core file with gdb or lldb to get the stack trace. EDIT: Don't post the core dump, it's a bad idea 😅
Thanks, will try to narrow the issue down and come back with the results.
I looked into the issue, and found out that on Node.js 12 (doesn't matter which minor version) the node-oom-heapdump module works well as long as the following flags are not used:
When these flags are used, the behavior is a bit unpredictable.
So, I'm not sure if I need to follow up on this.
Found this issue while searching for a solution similar to the Java heap dump on OOM. While llnode may do all the required things, it's not a tool that is familiar to JS developers, whereas dev tools are. And as for heapdump generation, I think that the double memory requirement is not an option for general usage. If your process is already flagged to be terminated, there should be a way to stop the world and stream heap contents directly to the filesystem without creating an intermediate object.
Heapdumps are essentially graphs of the live objects on the heap. Creating that graph is what takes up a lot of memory. It's unavoidable. A coredump doesn't have that problem because it doesn't create a graph, it simply dumps the heap as a byte array. One possible way forward is to create a tool (possibly integrated into the node binary) for transforming a coredump into a heapdump. I wrote a tool like that for Node.js v0.10 once but that's totally antiquated by now, of course. :-)
I think it is really the way to go. But after playing some time with core dump generation, I would say that not only the tooling, but core dump generation itself for a process that runs inside a container is an extremely difficult thing to get right. The main issue here is configuring the core dump location, which can only be done on the host machine and is probably not an option, at least until core_pattern namespacing is implemented in the kernel.
Node.js doesn't have direct access to the heap, so generating a heapdump is not something we can do out of the box. This is a feature request for V8 (https://bugs.chromium.org/p/v8/issues/list).
I've failed to find this issue and created #32756 (already closed it). A huge +1 from me (and, as I believe, from many node users) for the CLI flag option. @paulrutter did you already open an issue against V8?
@puzpuzpuz No, I haven't gotten to it yet.
As heap snapshot generation takes roughly 2x the heap size in additional memory, is there any room for improvement there?
Open-ended questions like that aren't useful. When is there not room for improvement? The 2x is a rule of thumb - i.e., a decent assumption, not a hard rule. The lower bound for most programs is about 33% (1 pointer per object where the average object is 3 pointers big). But before someone goes "oh, so it's only one-third": lower bound != average.
@bnoordhuis - according to me, the effort required to generate a dump involves traversing the object graph and recording the reference tree and the size information (for example, in our own tooling). I don't know how the V8 snapshot is actually collected, though - that is why I phrased it as an open-ended question.
Maybe partially. Probably not easily. The current heap snapshot generator uses additional memory because it creates persistent snapshots. They remain valid after they're created; e.g., JS strings are copied. A zero-copy, one-shot generator is conceivable but computing the graph edges is still going to require additional memory.
@bnoordhuis - understood all other points, thanks.
Agree, but that implies the generator footprint is a function of the number of objects in the heap and their cross-references, not of the size of the heap?
I don't really see much way around the memory requirements with the current snapshot approach. We'd really need to move to a heap tracing model that can stream out events as they happen and that would allow a graph to be built via post-processing.
Note that I was careful to write "live objects on the heap" in #27552 (comment). :-) In an out-of-memory condition they're roughly equivalent though. The heap is so full of live objects that there isn't room for more.
V8 provides v8::Isolate::AddNearHeapLimitCallback() for adjusting the heap limit when V8 is approaching it. The debugger implementation in V8 uses this to break in Chrome DevTools at the point where the heap size limit is near. I did a proof of concept.
Any comments about these observations?
@joyeecheung It would be interesting to know if your WIP can handle a volatile increase of memory usage.
@paulrutter Thanks. From what I can tell, using the NearHeapLimitCallback should work for the more volatile increase, and it should work better than the OOM handler, since the OOM handler is triggered when V8 is about to crash, whereas NearHeapLimitCallback is, as the name implies, triggered sometime before that, when there's still some room in the heap. The snapshot writing would be synchronous, so it's guaranteed that at least one snapshot will be written before the program crashes. Whether there will be more depends on how fast the heap grows and how much room we leave for V8/V8 leaves for us before the callback is invoked the next time, since snapshot generation triggers GC, which in turn might increase heap usage (by promoting objects in the young generation, but not for the snapshot generation itself, like what we had been worried about in this thread).
Also, as discussed in #33010 (comment), having this implemented in Node.js core instead of as an addon may help us avoid the situation where the snapshot generation triggers a system OOM due to the additional native memory overhead, because our own implementation has access to the parameters used initially to configure the V8 heap, so we can do some calculations with that information to avoid this as much as we can.
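As a rough illustration of the idea (a userland sketch only, not the native AddNearHeapLimitCallback-based approach from the PoC; the threshold and polling interval are arbitrary), one can approximate this by polling `v8.getHeapStatistics()` and writing a snapshot with the existing `v8.writeHeapSnapshot()` API once usage gets close to the limit:

```js
'use strict';
const v8 = require('v8');

// Poll heap usage and write a single snapshot when we get close to the limit.
// Because this runs in JS on the event loop, it can miss very fast allocation
// spikes between ticks - which is exactly what a native callback avoids.
const THRESHOLD = 0.9; // arbitrary: 90% of the heap size limit
let snapshotWritten = false;

const timer = setInterval(() => {
  if (snapshotWritten) return;
  const { used_heap_size, heap_size_limit } = v8.getHeapStatistics();
  if (used_heap_size / heap_size_limit > THRESHOLD) {
    snapshotWritten = true;
    // Synchronous and expensive: it triggers a GC and serializes the graph.
    const file = v8.writeHeapSnapshot();
    console.error(`Heap usage above ${THRESHOLD * 100}%, snapshot written to ${file}`);
    clearInterval(timer);
  }
}, 500);
timer.unref(); // don't keep the process alive just for the monitor
```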
Today I tried to run the module recommended here in the first comment (https://github.com/blueconic/node-oom-heapdump) while restricting the memory for the old heap to a low limit. When crashing, the process's memory rose to up to 500 MB and it needed about 7 minutes to gather the heap dump. This makes me question this technique, asking myself why I couldn't simply use a core dump here instead. Is there a way to create a core dump when running out of memory and later on (on a machine with enough resources available) "transform" it into a heap dump?
@SimonSimCity We're using that module for node processes restricted to between 80 and 160 MB, and when one of those crashes it never takes more than a few seconds to create the heapdump. Yes, core dumps can be created already, by passing the --abort-on-uncaught-exception flag mentioned earlier in this thread.
I tested it on an application our company is working on, which is a Meteor project running in development mode. Maybe the generated object graph, as mentioned in #27552 (comment), is very complex there... It was very often quoted that this requires a significant amount of additional memory (#27552 (comment), nodejs/diagnostics#239 (comment), nodejs/diagnostics#239 (comment), #27552 (comment)) - some of them also mentioning performance as a problematic factor. Heap snapshots also seem to be problematic when the heap size is high: nodejs/diagnostics#239 (comment) (the ticket linked in the comment mentions a size of >1.5 GB). If a core dump, automatically generated by the OS, can help us here, it should be used preferably in my opinion. At least on Linux it seems to be provided at almost no memory cost. Windows and other OSes might have a different format or even a different approach here, but I'd rather go this direction first than use a solution which requires significantly more memory. I'll test out some options here regarding core dumps, as I know my application will only run inside a Linux-based docker container.
This patch adds a --heapsnapshot-near-heap-limit CLI option that takes heap snapshots when the V8 heap is approaching the heap size limit. It will try to write the snapshots to disk before the program crashes due to OOM. PR-URL: #33010 Refs: #27552 Reviewed-By: Anna Henningsen <anna@addaleax.net> Reviewed-By: Richard Lau <rlau@redhat.com> Reviewed-By: Gireesh Punathil <gpunathi@in.ibm.com>
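For reference, using the new flag looks roughly like this (the script name, heap size and snapshot count are example values, and the exact snapshot file naming may differ between Node.js versions):

```js
// Run with something like:
//   node --max-old-space-size=100 --heapsnapshot-near-heap-limit=3 leak.js
// When the V8 heap approaches the 100 MB limit, up to 3 Heap.*.heapsnapshot
// files are written to disk before the process finally dies with OOM.
'use strict';

const leaked = [];
setInterval(() => {
  // Keep references around so the GC cannot reclaim them;
  // converting to a string puts the data on the V8 heap.
  leaked.push(Buffer.alloc(1024 * 1024, 'x').toString('latin1'));
}, 10);
```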
Closing this now that the command line flag has landed.
Will this land in Node.js 14.x?
Has anyone created a corresponding issue in V8? While I think it would be useful to extract heapdumps from cores, llnode is already quite useful for exploring core dumps. What we really need is a streaming heap writer that simply serializes the heap without allocating (much) additional memory, preferably without stopping the world but pausing the garbage collector as it iterates through the heap. It could be possible to implement this out of band in a separate module, and it would require code to convert the intermediate format into a compatible heapsnapshot, but this would be easier/more future-proof than trying to extract the heap from a corefile. It is probably(?) ok for there to be inconsistencies in the heap snapshot as new heap objects are allocated, but the important part is that we don't want the graphs built in memory when they can be built out of band once the raw heap structure is serialized to a file. Stopping the world to take a heap snapshot might be easier, but there are situations where at runtime we want to debug a leak without adding latency to in-flight requests, and both core dumps and heap dumps are nontrivially disruptive to the running process.
Introduction
This issue is a followup of #23328. The outcome of that issue was the introduction of the `v8.writeHeapSnapshot()` API. The next step would be to introduce a way of handling an out of memory situation in JS context.
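A minimal sketch of that API in use (the filename pattern is illustrative):

```js
'use strict';
const v8 = require('v8');

// Writes a .heapsnapshot file that can be loaded into the Memory tab of
// Chrome DevTools. This is synchronous and triggers a GC first, so it
// blocks the event loop while the snapshot is being serialized.
const filename = v8.writeHeapSnapshot(`heap-${process.pid}-${Date.now()}.heapsnapshot`);
console.log(`Heap snapshot written to ${filename}`);
```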
When running Node.js processes in a low-memory environment, every out-of-memory situation that occurs is interesting. To figure out why a process went out of memory, a heapdump can help a lot.
Desired solution
There are several possible solutions which would suffice:
- `process.on('fatal_error')`, which kicks in for an OoM event (see Post-mortem out of memory analysis, diagnostics#239 (comment)). The question is whether it's feasible to execute JS code after the 'fatal_error' occurred.

Alternatives
At the moment, we use our own module. This uses native code to hook into the `SetOOMErrorHandler` handler of V8. This works, although it's not very elegant.
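A rough sketch of how such an addon is typically wired up from JS (the option name below is an assumption based on the module's README rather than a verbatim copy; see https://github.com/blueconic/node-oom-heapdump for the actual API):

```js
// Run with a restricted heap, e.g.:
//   node --max-old-space-size=100 app.js
'use strict';
const path = require('path');

// The option name `path` is assumed here; check the module's documentation.
require('node-oom-heapdump')({
  path: path.resolve(__dirname, 'oom_heapdump'), // prefix of the .heapsnapshot file
});

// ...rest of the application. If V8 runs out of heap, the native
// SetOOMErrorHandler hook writes the snapshot before the process exits.
```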