
tokio-threadpool: Better default behaviour #1040

Closed
prasannavl opened this issue Apr 6, 2019 · 2 comments
prasannavl commented Apr 6, 2019

Currently, by default the following happens:

This is, of course, all configurable, and the first point is also addressed by #427.
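For reference, here is a minimal sketch of how these knobs can already be tuned. The method names (`pool_size`, `max_blocking`, `keep_alive`) are recalled from tokio-threadpool 0.1's `Builder` and should be checked against the docs; the values are placeholders, not recommendations.

```rust
use std::time::Duration;
use tokio_threadpool::Builder;

// Sketch only: signatures are assumptions based on tokio-threadpool 0.1.
let pool = Builder::new()
    .pool_size(8)                        // worker threads
    .max_blocking(128)                   // cap on backup/blocking threads
    .keep_alive(Duration::from_secs(30)) // idle-thread timeout
    .build();
```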

I noticed this during hyperium/hyper#1790, so I thought I'd open an issue here for the behaviour.

Learnings from outside Rust

  • While the defaults need not be perfect, I'd like them to be based on some alignment with use cases for a start, and .NET seems like a good example of a platform that has done async right for a long time now. While it's been a while since I've used it, if I remember correctly, 30 seconds is the default Microsoft settled on in their general-purpose threadpool for keeping idle threads alive, presumably after case studies on finding an optimal time.
  • That being said, it requires some study to do this optimally in Rust, since the model is a bit different. In .NET, IO is handled by a separate set of workers (usually limited to the number of cores by default); Task (the equivalent of a Rust future's task) also accepts a flag indicating whether it's long-running or short-running, based on which the pool can make smarter decisions; and the default number of threads is set to some multiple of the number of logical cores.

Potential direction

  • Start with 30 seconds as the default keep alive timeout.
  • Start with, say, 16 * the number of logical cores for the maximum number of backup threads used by the blocking function. The fixed 100 threads doesn't seem like a great default on single-core systems, low-memory embedded systems, or server systems with a massive number of cores. A multiplier of 8-16 would, I think, make a good, reasonable default for maximum blocking threads.
  • Expand the maximum worker pool size by a smaller multiplier of the logical cores, say 2 * cores, to provide room for APIs like this (future-lock: Mutex, Line:160) that unfortunately do lock for extremely short periods of time. While it would be ideal not to have these, they do exist and quite possibly will continue to for a long time. Expanding the max pool gives some room here.
  • Ideally, there would be a cleanup mechanism for the worker pool as well, so that it can maintain a thread count equal to the number of logical cores; the 2x multiplier could then be raised to 4x, with the extra threads cleaned up after the keep-alive duration.
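The sizing proposed above can be sketched as a small std-only computation. The function name `proposed_defaults` is hypothetical, purely for illustration; it just derives the suggested limits from the logical core count:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical helper illustrating the proposed defaults above;
// not part of tokio-threadpool's API.
fn proposed_defaults() -> (usize, usize, Duration) {
    // Logical core count, falling back to 1 if it cannot be queried.
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let max_workers = 2 * cores;   // worker pool: 2x logical cores
    let max_blocking = 16 * cores; // backup/blocking threads: 16x logical cores
    let keep_alive = Duration::from_secs(30); // idle-thread keep-alive
    (max_workers, max_blocking, keep_alive)
}

fn main() {
    let (workers, blocking, keep_alive) = proposed_defaults();
    println!(
        "max workers: {}, max blocking: {}, keep-alive: {:?}",
        workers, blocking, keep_alive
    );
}
```

On a 4-core machine this would yield 8 worker threads and 64 blocking threads, versus today's fixed cap of 100 blocking threads regardless of core count.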

In scenarios where threads have to be retained, users can always configure it to be so. This way, the defaults are more attuned to a dynamic workload than a static one, which I'd think fits a good number of web scenarios.

@prasannavl prasannavl changed the title Better default behaviour for the threadpool tokio-threadpool: Better default behaviour Apr 6, 2019
@carllerche carllerche assigned ghost Apr 9, 2019
@carllerche (Member) commented:

Could benchmarks be used to drive these changes?

@ghost ghost removed their assignment Jun 30, 2019
@carllerche (Member) commented:

Closing this as the scheduler & thread behavior has completely changed.
