
stream: bump default highWaterMark #46608

Closed
wants to merge 1 commit

Conversation

ronag
Member

@ronag ronag commented Feb 10, 2023

This should give a performance boost across the board at the cost of slightly higher memory usage.

Given that the old limit is a decade old and memory capacity has doubled many times since, I think it is appropriate to slightly bump the default limit.
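
For context, a minimal sketch of how the default interacts with the per-stream option (the 64 KiB value is illustrative; it is not necessarily the value this PR lands on):

```js
const { Readable } = require('node:stream');

// A stream created without an explicit highWaterMark picks up the default.
const plain = new Readable({ read() {} });
console.log(plain.readableHighWaterMark); // 16384 (16 KiB) before this change

// The default can always be overridden per stream.
const tuned = new Readable({ read() {}, highWaterMark: 64 * 1024 });
console.log(tuned.readableHighWaterMark); // 65536
```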

@ronag ronag added the stream label Feb 10, 2023
@ronag ronag requested a review from mcollina February 10, 2023 15:12
@nodejs-github-bot
Collaborator

Review requested:

  • @nodejs/streams

@nodejs-github-bot nodejs-github-bot added the needs-ci label Feb 10, 2023
@ronag
Member Author

ronag commented Feb 10, 2023

TBH I would like to bump it even further to 128k but I don't think I will get consensus on that one.

@mcollina mcollina added the semver-minor, dont-land-on-v14.x, dont-land-on-v18.x, and baking-for-lts labels and removed the dont-land-on-v14.x and dont-land-on-v18.x labels Feb 10, 2023
@mcollina
Member

I've added the "baking-for-lts" label so we wait a bit before backporting, in case issues show up.

FWIW I think bumping this to 128KB is acceptable. It would help reduce overhead.

cc @mscdex @cjihrig

@ronag ronag added the request-ci label and removed the needs-ci label Feb 10, 2023
@github-actions github-actions bot added the request-ci-failed label and removed the request-ci label Feb 10, 2023
@github-actions
Contributor

Failed to start CI
- Validating Jenkins credentials
✖  Jenkins credentials invalid
https://github.com/nodejs/node/actions/runs/4145265543

mscdex
mscdex previously requested changes Feb 10, 2023
Contributor


-1 I don't think we should do this. Node runs on all kinds of setups, including more resource-constrained hardware. The current defaults work well memory usage-wise in either kind of setup.

I think a better solution would be to allow end users to set something like Stream.defaultObjectWaterMark/Stream.defaultWaterMark, which would default to the current 16 and 16 KiB respectively.

This way users could just set this at the top of their main script and get the same effect if they want to take on that kind of potential extra memory consumption.
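
For reference, a sketch of what that could look like; Stream.defaultWaterMark and Stream.defaultObjectWaterMark are the names proposed here, not an existing API, and the values are illustrative:

```js
const Stream = require('node:stream');

// Hypothetical knobs as proposed above; these properties do not exist
// in core at the time of this comment, so assigning them has no effect
// and only illustrates the suggested usage.
Stream.defaultWaterMark = 128 * 1024; // byte/Buffer streams: 128 KiB instead of 16 KiB
Stream.defaultObjectWaterMark = 32;   // object-mode streams: 32 objects instead of 16

// Every stream created afterwards without an explicit highWaterMark
// would then pick up these application-wide defaults.
```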

@lpinca
Member

lpinca commented Feb 10, 2023

My memory tells me that this was already proposed in the past but I can't find the PRs/issues. I kind of agree with @mscdex. Instead of changing the default for everyone, make it customizable.

@ronag
Member Author

ronag commented Feb 10, 2023

The problem with that is that most users will not bother/dare/know how to set it. So we are slowing it down for everyone to be compatible with a few... not saying that's wrong, but is there any middle ground? Can we e.g. check the total available system memory and base it on that?

Messing around with hwm and Buffer.poolSize is kind of advanced usage.
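
For reference, that kind of manual tuning looks roughly like this (a sketch; input.bin is a placeholder path and the values are illustrative):

```js
const fs = require('node:fs');

// Raise the internal pool used for small Buffer allocations (default 8 KiB).
Buffer.poolSize = 64 * 1024;

// Pass highWaterMark explicitly on each stream where throughput matters.
const source = fs.createReadStream('input.bin', {
  highWaterMark: 128 * 1024, // read in 128 KiB chunks
});
source.pipe(process.stdout);
```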

@mscdex
Contributor

mscdex commented Feb 10, 2023

The problem with that is that most users will not bother/dare/know how to set it.

The properties I proposed would be in the documentation, which I think developers would already be looking at (especially if they are at the point where the stream high water mark is actually becoming a bottleneck), so I don't see this as an issue.

So we are slowing it down for everyone to be compatible with a few... not saying that's wrong

It could be argued both ways. I don't think it's fair to say that everyone is running node in environments where memory usage is not a concern.

but is there any middle ground?

I believe what I proposed is a middle ground. Besides, we already employ similar user-tweakable limits throughout node core, so this would just be making things more consistent in that regard.

Can we e.g. check the total available system memory and base it on that?

I don't think there is any realistic way of doing something like this as that'd be making the assumption that the node process is the only thing running on the OS that is using any considerable amount of resources.

Member

@benjamingr benjamingr left a comment


I agree there is a tradeoff and nuance here, but I think we're currently too conservative, so I agree with this change. Constrained systems can usually override this.

@LiviaMedeiros
Contributor

@ronag unrelated to the contents of this PR, but why the removal of the needs-ci label?
Correct me if I'm wrong, but AFAIR it's a safety measure to prevent accidental merging of PRs with failing CI: https://github.com/nodejs/node-core-utils/blob/267dde0692d42e2482d133d4cccfcae13586ad89/lib/pr_checker.js#L429-L440
So it should always be kept (arguably with the exception of doc-only PRs, trivial linting, etc.)

@BridgeAR
Member

Can we e.g. check the total available system memory and base it on that?

I don't think there is any realistic way of doing something like this as that'd be making the assumption that the node process is the only thing running on the OS that is using any considerable amount of resources.

Increasing the highWaterMark is going to let the process finish faster and thus could actually lead to less memory pressure in high-load scenarios.

So we are slowing it down for everyone to be compatible with a few... not saying that's wrong

It could be argued both ways. I don't think it's fair to say that everyone is running node in environments where memory usage is not a concern.

Our defaults are always a matter for debate, and we will likely never have the correct default for everyone. What we should IMO strive for is a default that is best for most users, so that only a few have to change it. Many users are not aware of these things and only check them when they know they have to be cautious (which is what I would expect from users running in an environment where memory is a concern).

Using a heuristic where we check for the available memory sounds like a good idea to me. That way we will likely have the right setting for more users than we have currently. One difficult part might be to define the thresholds and what memory to look at (overall memory vs. free memory). For now, I would just use process.constrainedMemory() and default to the lower default in case undefined is returned.
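
A minimal sketch of that heuristic (the 256 MiB threshold and 64 KiB upper value are illustrative assumptions, not values agreed on here):

```js
// pickDefaultHighWaterMark is a hypothetical helper, not part of core.
function pickDefaultHighWaterMark() {
  const constrained = process.constrainedMemory(); // bytes, or undefined
  if (constrained === undefined) {
    return 16 * 1024; // unknown constraints: keep the conservative default
  }
  // Enough headroom: use a larger default.
  return constrained >= 256 * 1024 * 1024 ? 64 * 1024 : 16 * 1024;
}

console.log(pickDefaultHighWaterMark());
```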

@vweevers
Contributor

Heuristics can't tell how many parallel streams will be created and what the intended purpose of the available memory is. I like that streams slow down and leave room.

I also prefer baby steps here. The effect of increasing a hardcoded default is relatively easy to measure and doesn't mess with user expectations that much, compared to heuristics. Increasing the hwm every so often (and gradually) can improve performance for a majority of users, while still giving others a chance to test it out (even unknowingly).

@ronag
Member Author

ronag commented Feb 11, 2023

@ronag unrelated to the contents of this PR, but why the removal of the needs-ci label?

Correct me if I'm wrong, but AFAIR it's a safety measure to prevent accidental merging of PRs with failing CI: https://github.com/nodejs/node-core-utils/blob/267dde0692d42e2482d133d4cccfcae13586ad89/lib/pr_checker.js#L429-L440

So it should always be kept (arguably with the exception of doc-only PRs, trivial linting, etc.)

We don't land anything without CI. Don't worry. I just missed adding the request-ci label.

@lpinca
Member

lpinca commented Feb 11, 2023

This increases the HWM by eight times while leaving the default for object mode unchanged. It will increase memory usage by eight times on a proxy that pipes data to a slower destination. That is almost an order of magnitude.

FWIW we are using 64 KiB for file system streams.

@debadree25 debadree25 removed the commit-queue-failed label May 6, 2023
@debadree25
Member

This would require a rebase, I think? Removed the author-ready label for the time being.

@marco-ippolito
Member

for reference: #27121

@aduh95
Contributor

aduh95 commented Sep 29, 2023

This needs a rebase.

This should give a performance boost across the board.

Given that the old limit is a decade old and memory capacity has
doubled many times since, I think it is appropriate to slightly bump
the default limit.
@benjamingr benjamingr added the request-ci label Sep 29, 2023
@github-actions github-actions bot removed the request-ci label Sep 29, 2023

@aduh95
Contributor

aduh95 commented Sep 30, 2023

Quite a few CI failures that would need to be addressed.

@RafaelGSS
Member

@ronag do you need help with that? Would be great to have this in the v21 proposal.

@ronag
Member Author

ronag commented Oct 8, 2023

Feel free to take over.

@RafaelGSS
Member

@ronag Apparently, I don't have push permissions to this PR (nor to the nxtedition fork). I'll create another PR cherry-picking this commit.

@constantind

constantind commented Dec 9, 2023

FYI: In Kubernetes with flannel, 256k and 512k only increase throughput on Red Hat Linux 8 with defaults; any other values make no difference, and a value higher than 600k starts to drop throughput. See also this test scenario streaming more than 200 MB, where 16k is unable to get the TCP state machine working: https://github.com/orgs/nodejs/discussions/51082

ronag added a commit to nxtedition/node that referenced this pull request Mar 10, 2024
This should give a performance boost across the board.

Given that the old limit is a decade old and memory capacity has
doubled many times since, I think it is appropriate to slightly bump
the default limit.

Refs: nodejs#46608
Refs: nodejs#50120
ronag added a commit to nxtedition/node that referenced this pull request Mar 10, 2024
ronag added a commit to nxtedition/node that referenced this pull request Mar 10, 2024
ronag added a commit to nxtedition/node that referenced this pull request Mar 11, 2024
ronag added a commit to nxtedition/node that referenced this pull request Mar 11, 2024
ronag added a commit to nxtedition/node that referenced this pull request Mar 13, 2024
ronag added a commit to nxtedition/node that referenced this pull request Mar 13, 2024
nodejs-github-bot pushed a commit that referenced this pull request Mar 13, 2024
This should give a performance boost across the board.

Given that the old limit is a decade old and memory capacity has
doubled many times since, I think it is appropriate to slightly bump
the default limit.

PR-URL: #52037
Refs: #46608
Refs: #50120
Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Yagiz Nizipli <yagiz.nizipli@sentry.io>
Reviewed-By: Chengzhong Wu <legendecas@gmail.com>
Reviewed-By: Moshe Atlow <moshe@atlow.co.il>
Reviewed-By: Mohammed Keyvanzadeh <mohammadkeyvanzade94@gmail.com>
Reviewed-By: Trivikram Kamat <trivikr.dev@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
rdw-msft pushed a commit to rdw-msft/node that referenced this pull request Mar 26, 2024
jcbhmr pushed a commit to jcbhmr/node that referenced this pull request May 15, 2024
Labels
semver-major, stream