stream: bump default highWaterMark #46608
Conversation
TBH I would like to bump it even further to 128k, but I don't think I will get consensus on that one.
Failed to start CI: Validating Jenkins credentials ✖ Jenkins credentials invalid. https://github.com/nodejs/node/actions/runs/4145265543
-1 I don't think we should do this. Node runs on all kinds of setups, including more resource-constrained hardware. The current defaults work well memory usage-wise in either kind of setup.
I think a better solution would be to allow end users to set something like `Stream.defaultObjectWaterMark`/`Stream.defaultWaterMark`, which would default to the current 16/16KB respectively.
This way users could just set this at the top of their main script and get the same effect if they want to take on that kind of potential extra memory consumption.
My memory tells me that this was already proposed in the past, but I can't find the PRs/issues. I kind of agree with @mscdex: instead of changing the default for everyone, make it customizable.
The problem with that is that most users will not bother/dare/know how to set it. So we are slowing it down for everyone to be compatible with a few... not saying that's wrong, but is there any middle ground? Can we e.g. check the total available system memory and base it on that? Messing around with hwm and Buffer.poolSize is kind of an advanced use case.
The properties I proposed would be in the documentation, which I think developers would already be looking at (especially if they are at the point where the stream high water mark is actually becoming a bottleneck), so I don't see this as an issue.
It could be argued both ways. I don't think it's fair to say that everyone is running node in environments where memory usage is not a concern.
I believe what I proposed is a middle ground. Besides, we already employ similar user-tweakable limits throughout node core, so this would just be making things more consistent in that regard.
I don't think there is any realistic way of doing something like this, as that would be making the assumption that the node process is the only thing running on the OS that is using any considerable amount of resources.
I agree there is a tradeoff here and nuance but I think we're currently too conservative and so agree with this change. Constrained systems can usually override this.
@ronag unrelated to the contents of this PR, but why the removal of `needs-ci`?
Increasing the highWaterMark is going to finish the process faster and thus could actually lead to less memory pressure in high-load scenarios.
Our defaults are always a matter of debate, and we will likely never have the correct default for everyone. What we should IMO strive for is a default that is best for most users, so that only a few have to change them. Many users are not aware of these things and only check for them in case they know they have to be cautious (which is what I would expect of users running in an environment where memory is a concern). Using a heuristic where we check for the available memory sounds like a good idea to me. That way we will likely have the right setting for more users than we have currently. One difficult part might be to define the thresholds and what memory to look at (overall memory vs. free memory). For now, I would just use […]
Heuristics can't tell how many parallel streams will be created or what the intended purpose of the available memory is. I like that streams slow down and leave room. I also prefer baby steps here. The effect of increasing a hardcoded default is relatively easy to measure and doesn't mess with user expectations that much, compared to heuristics. Increasing the hwm every so often (and gradually) can improve performance for a majority of users, while still giving others a chance to test it out (even unknowingly).
We don't land anything without CI. Don't worry. I just missed adding the request-ci label.
This increases the HWM by eight times while leaving the default for object mode unchanged. It will increase memory usage by eight times on a proxy that pipes data to a slower destination. That is almost an order of magnitude. FWIW, we are using 64 KiB for file system streams.
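The proxy concern is simple arithmetic: with slow destinations, each piped connection can buffer up to the highWaterMark, so worst-case buffered memory scales linearly with connection count. A back-of-the-envelope sketch (the connection count is an invented example figure):

```javascript
// Worst case for a proxy piping to slow destinations: every
// connection's readable buffer fills up to the highWaterMark.
const KiB = 1024;

function worstCaseBufferedBytes(connections, highWaterMark) {
  return connections * highWaterMark;
}

const before = worstCaseBufferedBytes(10_000, 16 * KiB);  // ~160 MB buffered
const after = worstCaseBufferedBytes(10_000, 128 * KiB);  // ~1.3 GB buffered
console.log(after / before); // 8
```

This is why the objection frames a 16 KiB → 128 KiB bump as "almost an order of magnitude" for that workload, even though it is invisible on a single stream.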
This would require a rebase, I think? Removed the author ready label for the time being.
For reference: #27121
This needs a rebase. |
This should give a performance boost across the board. Given that the old limit is a decade old and memory capacity has doubled many times since, I think it is appropriate to slightly bump the default limit.
Force-pushed from a480e09 to b01b94e
Quite a few CI failures that would need to be addressed.
@ronag do you need help with that? Would be great to have this in the v21 proposal.
Feel free to take over.
@ronag Apparently, I don't have push permissions to this PR (nor the nxtedition fork). I'll create another PR cherry-picking this commit.
FYI: In Kubernetes with flannel, 256k and 512k only increase throughput on Red Hat Linux 8 with defaults; any other values make no difference, and a value higher than 600k starts to drop throughput. See also the test scenario: when streaming more than 200MB, the 16k default is unable to get the TCP state machine working: https://github.com/orgs/nodejs/discussions/51082
This should give a performance boost across the board. Given that the old limit is a decade old and memory capacity has doubled many times since, I think it is appropriate to slightly bump the default limit. Refs: nodejs#46608 Refs: nodejs#50120
This should give a performance boost across the board. Given that the old limit is a decade old and memory capacity has doubled many times since, I think it is appropriate to slightly bump the default limit. PR-URL: #52037 Refs: #46608 Refs: #50120 Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com> Reviewed-By: Matteo Collina <matteo.collina@gmail.com> Reviewed-By: Yagiz Nizipli <yagiz.nizipli@sentry.io> Reviewed-By: Chengzhong Wu <legendecas@gmail.com> Reviewed-By: Moshe Atlow <moshe@atlow.co.il> Reviewed-By: Mohammed Keyvanzadeh <mohammadkeyvanzade94@gmail.com> Reviewed-By: Trivikram Kamat <trivikr.dev@gmail.com> Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
This should give a performance boost across the board at the cost of slightly higher memory usage.
Given that the old limit is a decade old and memory capacity has doubled many times since, I think it is appropriate to slightly bump the default limit.