src: properly configure default heap limits #25576
Conversation
What will this do if the total memory is small, like something in a 128M cloud container? I think I've seen Node.js be able to have heaps which are bigger than the available memory (which, with a default of 1.4G, could often be the case for containers) and things still run ok because some parts are paged out. I'm wondering if this might cause some of those deployments to start failing?
Also, do you have a quick overview of how the size of the heap is configured based on the total memory? For example, if I have 32GB of memory, is it one-for-one so that it's x% of 32G? If so, what is x%?
We probably need to add at least a doc update somewhere that explains the behaviour change that people will see as part of this PR.
NewIsolate is exposed to embedders; do they have any issues with the new default?
The details of how the heap sizes are computed can be seen by following the logic in ResourceConstraints::ConfigureDefaults. Right now, V8 still seems to have a hard-coded ceiling of 2GB on the old generation size:

```cpp
size_t Heap::ComputeMaxOldGenerationSize(uint64_t physical_memory) {
  const size_t old_space_physical_memory_factor = 4;
  size_t computed_size = static_cast<size_t>(physical_memory / i::MB /
                                             old_space_physical_memory_factor *
                                             kPointerMultiplier);
  return Max(Min(computed_size, HeapController::kMaxSize),
             HeapController::kMinSize);
}
```

To summarize the behavior, given the current implementation of V8 (restricting discussion to 64-bit builds): there is a floor of 256MB and a ceiling of 2048MB on the old generation size.
With this spelled out, it is worth pausing and thinking about this more deeply. I do think we need to change the default on large machines. I routinely see workloads where the job fails with 'out of memory' when there clearly is enough memory available. Note that this touches on the implementation details of the Heap in V8, which will likely change as V8 changes. My aim here was to provide V8 more information so it can configure the heap sizes more intelligently. Node.js unilaterally deciding the memory limits is likely not a good solution; nor do we expect that a default value hard-coded in V8 would match our use-cases. I'll reflect upon this more, and if people have opinions, it would be great to hear them. /cc @nodejs/v8.
src/node.cc (outdated):
```cpp
@@ -1813,6 +1813,14 @@ bool AllowWasmCodeGenerationCallback(
Isolate* NewIsolate(ArrayBufferAllocator* allocator, uv_loop_t* event_loop) {
  Isolate::CreateParams params;
  params.array_buffer_allocator = allocator;

  double totalMemory = uv_get_total_memory();
  if (totalMemory > 0) {
```
Two questions:
- Is it safe to continue if totalMemory < 0, i.e. uv_get_total_memory returned an error?
- Semver-ity (in continuation of the discussion thread): maybe this should be semver-major, so we can think less about nuanced implications?
P.S. does this have priority over CLI parameters (e.g. --max-old-space-size)?
P.S. does this have priority over CLI parameters (e.g. --max-old-space-size)?
No, the CLI flags take precedence.
is it safe to continue if totalMemory < 0 i.e. uv_get_total_memory returned an error?
I believe so. For whatever reason (security policy?) we don't know how much memory is available, but that doesn't imply memory allocation is going to fail.
Maybe we can codify these limitations here (and keep this ≤ semver-minor)? Or alternatively mark this as semver-major?
Not a counterargument but for your consideration:
One reason we never changed the defaults so far (it's come up a few times) is that node is often used in a multi-process setup. With that kind of workload you don't want to use physical memory as an input signal because you'll overcommit.
(It speaks in this PR's favor that it still has an upper limit in place so it's unlikely it'll go completely runaway.)
src/node.cc (outdated):
```cpp
double totalMemory = uv_get_total_memory();
```
Style: total_memory
Done.
Setting the max old space limit is not necessarily the same as over-committing. It just means that we now allow the old space to grow to that value if the process really needs to consume that much memory. The case where I expect the behavior to change is when we run memory-hungry processes on an under-provisioned machine. In such a scenario, today, things would run out of memory and crash. We would change the behavior so that the programs run for a bit longer (on swap) before crashing. IMO, we are changing an error outcome to a slightly different error outcome. And, as you observe also, the ceiling of 2GB limits the runaway case for leaky/buggy programs.
Force-pushed from 93966b2 to 0cb38e2.
I think the new behavior is more sensible, especially on low-memory machines. I'm fine with calling this a semver-major.
I am going to land this tomorrow unless objections show up. /cc @mhdawson
CI (scheduled): https://ci.nodejs.org/job/node-test-pull-request/20470/
AIX flaked. Resume build: https://ci.nodejs.org/job/node-test-pull-request/20482/
AIX seems to be having infrastructure issues (Java errors). Another resume build: https://ci.nodejs.org/job/node-test-pull-request/20490/
We are now encountering an issue: Docker by default uses 2048 MB of memory, so our local builds no longer work. We now have to manually set …
@ofrobots regarding your comment:
In Angular CLI, one of the most common problems we have is out-of-memory errors. Undoubtedly we could do better on our side to use less memory, but it's always surprising for our users to hit a hardcoded memory limit when their development and CI machines have a lot more memory available than that. We were delighted to hear that Node 12 would configure the limit based on available memory (as described in the blog post), so we told our users they should update to Node 12. But we weren't aware this still meant there was a new static limit. And although the new limit is still around 50% more than before, it looks like Node 12 uses ~30% more memory than Node 10, so users with bigger projects started having memory problems just from moving to Node 12.
@filipesilva You're right. I've been working with the V8 team to get this fixed more comprehensively. Here's the V8 bug: https://bugs.chromium.org/p/v8/issues/detail?id=9306. There are now more ergonomic APIs for Node.js to use to configure the memory usage, and I expect we can start using them once they trickle into Node. It would be great to have some example projects / repros that consume large heaps that we can experiment with. I'll look into the links you provided.
angular/angular-cli#13734 (comment) includes analysis of the repositories in https://github.com/filipesilva/angular-cli-perf-benchmark. You can find more detailed numbers there. The scripts in that repository benchmark a number of projects using CircleCI. You can also run these benchmarks locally.
We've also been trying hard to reduce our memory usage on bigger projects, but it's hard to do so, as the Chrome devtools for Node crash when gathering CPU profiles and Heap timelines. This makes it harder to figure out where memory/CPU time is being spent.
You may want to use a system with fewer moving parts to get the profiles. For example, npm.im/pprof is a node module that can gather (sampling) heap profiles and CPU profiles for you programmatically, without involving DevTools or the Chrome DevTools Protocol.
@ofrobots I did not know of pprof. Capturing the CPU profile using node pprof was a breeze, although the browser visualization for the CPU profile seems to fail to render. I had been trying the direct node inspector API for CPU profiles and heap snapshots, and then also tried modifying the API usage to take heap profiles in a similar way to your https://github.com/v8/sampling-heap-profiler. But Chrome devtools crashes on heap snapshots larger than 2 GB, so that doesn't help. Maybe pprof will handle these better. Anyway, thanks a bunch for the tip!
@ofrobots Thank you for opening google/pprof#475
@ofrobots I have some questions.
Unless configured, V8 defaults to limiting the max heap size to 700 MB or 1400 MB on 32-bit and 64-bit platforms respectively. This default is based on browser use-cases and doesn't make a lot of sense generally. This change properly configures the heap size based on actual available memory.
This should reduce the number of instances where we run out of memory processing larger data-sets. It is still possible to pass --max-old-space-size to use a different limit.
CI: https://ci.nodejs.org/job/node-test-pull-request/20203/
CI: https://ci.nodejs.org/job/node-test-pull-request/20310/
Checklist
- make -j4 test (UNIX), or vcbuild test (Windows) passes