node only using half of the available memory #35573
Comments
Yeah, that seems fair. I’m wondering if there’s a good way to detect whether we’re in such an environment, given that docker containers can also do a lot of different things, including running multiple processes that need to share memory.
I don't think there is a reliable way to know that the Node process is going to be the only one to use the host's memory.
Maybe …
The problem is that …
Running as PID=1 would be a decent metric. No system doing that would survive the process running out of heap space anyway. Since this is a maximum and not a fixed allocation, processes can still share memory, to a limit. There’s no really great default here. No matter what the heap limit is set to, a multi-process environment could still exhaust available memory: three Node.js processes each having their heap limit set to the default 50% of available RAM could do that.
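For illustration only, here is a rough sketch of what a PID-1 plus cgroup-limit heuristic could look like from userland. The cgroup file paths and the 75% headroom factor are assumptions for the sketch, not anything Node.js currently does:

// Hypothetical sketch: detect a "container-like" environment and derive a heap
// size from the cgroup memory limit instead of the default ~50% of system RAM.
const fs = require('fs');

function containerMemoryLimitBytes() {
  // cgroup v2 exposes the limit in memory.max; cgroup v1 in memory.limit_in_bytes.
  const candidates = [
    '/sys/fs/cgroup/memory.max',
    '/sys/fs/cgroup/memory/memory.limit_in_bytes',
  ];
  for (const path of candidates) {
    try {
      const raw = fs.readFileSync(path, 'utf8').trim();
      if (raw !== 'max') return Number(raw);
    } catch (err) {
      // File not present on this system; try the next candidate.
    }
  }
  return null;
}

// Running as PID 1 is the heuristic discussed above: in most containers the
// application is the init process, which is rarely the case on a full host.
const limit = containerMemoryLimitBytes();
if (process.pid === 1 && limit) {
  // Leave some headroom for memory outside the V8 heap (buffers, native
  // allocations, worker threads); the 75% factor is an arbitrary choice here.
  const suggestedMb = Math.floor((limit * 0.75) / (1024 * 1024));
  console.log('could launch with --max-old-space-size=' + suggestedMb);
}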
My primary concern (and I believe a common use case) is in an orchestration system (ECS, Kubernetes, etc.) where the service has nodes coming up and down based on need. Ideally, the memory available to the process is controlled by the orchestration system, but currently only half the memory is used unless --max-old-space-size is also specified.
I’d still argue that it would be better for Node.js to use all available memory inside Docker by default, since this affects everyone who uses Node with Docker memory limits, and it would be a huge time saver for them to not have to run into this problem and figure out that they need to set --max-old-space-size.
I agree completely and would LOVE it to be the default (at least when running in a container), but if there's a concern about it being a "breaking change" or something like that, then I'd at least like to have a way to turn that on.
A key challenge with allocating so much memory to the v8 heap is that there's a good deal of memory allocated in Node.js that is outside of that heap. Also consider that every worker thread has its own v8 heap. It's going to be very difficult to generalize an approach that works consistently.
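As a small illustration of the worker-thread point: each worker gets its own V8 heap, and its limit can be sized independently of the main thread's, which is part of why a single "use all the memory" default is hard to generalize. The 128 MB value below is just an arbitrary example:

// Spawn a worker with its own heap limit and compare it to the main thread's.
const { Worker } = require('worker_threads');
const v8 = require('v8');

new Worker(
  'const v8 = require("v8"); console.log("worker heap limit:", v8.getHeapStatistics().heap_size_limit);',
  { eval: true, resourceLimits: { maxOldGenerationSizeMb: 128 } }
);

console.log('main heap limit:', v8.getHeapStatistics().heap_size_limit);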
Agreed that worker threads throw a complication in there, but the current method doesn't really address that concern either, and I don't think that the ability to add more complexity through worker threads should prevent the default behavior from being improved to take advantage of the available resources.
One way around this is to have a base image that asks the orchestration layer for the limit at container startup time (e.g. as part of a startup script) and dynamically sets --max-old-space-size (though a better default for PID=1 node processes, or some other standard container environment detection, would be even better!)
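For example, a minimal entrypoint sketch along those lines, assuming a cgroup v2 mount and a hypothetical server.js; the 75% headroom is an arbitrary choice:

#!/bin/sh
# Hypothetical startup script: read the memory limit the orchestrator granted
# to the container and derive --max-old-space-size from it before starting node.
LIMIT_BYTES=$(cat /sys/fs/cgroup/memory.max 2>/dev/null)
if [ -n "$LIMIT_BYTES" ] && [ "$LIMIT_BYTES" != "max" ]; then
  # Leave ~25% headroom for memory outside the V8 heap.
  export NODE_OPTIONS="--max-old-space-size=$((LIMIT_BYTES * 3 / 4 / 1024 / 1024))"
fi
exec node server.js "$@"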
What steps will reproduce the bug?
docker run --rm -it --memory=1G --memory-swap=1G node:12.19.0-buster-slim bash
node -e 'const v8 = require("v8"); console.log(v8.getHeapStatistics());'
How often does it reproduce? Is there a required condition?
100%
What is the expected behavior?
node would use all available memory
What do you see instead?
node is only using half of the memory, so we either need to give the container twice as much memory as we really want it to use (which is problematic) or also specify --max-old-space-size in addition to controlling the container, which means we have to keep 2 different settings in sync and that's a big pain.
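For illustration, the workaround means encoding the limit twice; something like the following, where 768 is an arbitrary value chosen to leave headroom under the 1G container limit:

docker run --rm -it --memory=1G --memory-swap=1G node:12.19.0-buster-slim \
  node --max-old-space-size=768 -e 'const v8 = require("v8"); console.log(v8.getHeapStatistics().heap_size_limit);'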
Additional information
It appears that it was changed here, and only using half of memory makes sense when running on a system with multiple processes, but when running in a docker container, using all of the memory allocated to the container makes more sense.