Currently, by default, the following happens:

Threads in the pool don't ever go back down, as the keep-alive is set to None here: https://github.com/tokio-rs/tokio/blob/master/tokio-threadpool/src/builder.rs#L99
100 threads is the default maximum number of blocking threads on any system, regardless of the number of cores or system capacity, due to this: https://github.com/tokio-rs/tokio/blob/master/tokio-threadpool/src/builder.rs#L101

This is, of course, all configurable, and the first one is also addressed by #427.
I noticed this during hyperium/hyper#1790, so I thought I'd open one up here for the behaviour.
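For reference, a minimal sketch of how both of these knobs can already be overridden. This assumes the tokio-threadpool 0.1 Builder API (keep_alive, max_blocking) together with futures 0.1; method names may differ in other versions.

```rust
use std::time::Duration;

use futures::Future;
use tokio_threadpool::Builder;

fn main() {
    // Override the current defaults: idle threads exit after 30 seconds
    // (instead of never), and the blocking-thread cap is set explicitly.
    let pool = Builder::new()
        .keep_alive(Duration::from_secs(30)) // default: None, threads never exit
        .max_blocking(100)                   // default: 100, regardless of core count
        .build();

    // Wait for the pool to drain and shut down.
    pool.shutdown_on_idle().wait().unwrap();
}
```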
Learnings from outside Rust
While the defaults need not be perfect, I'd like them to be based on some alignment with use cases for a start - and .NET seems like a good example, having done async right for a long time now. While it's been a while since I've used it, if I remember correctly, 30 seconds is the default Microsoft used in their general-purpose threadpool to keep idle threads alive, presumably after case studies to find an optimal time.
That being said, it requires some study to do this optimally in Rust, since the model is a bit different. In .NET, IO is handled by a different set of workers (usually limited to the number of cores by default), Task (the equivalent of a Rust future's task) also allows a flag to mark it as long-running or short-running (based on which the pool can make smarter decisions), and the default number of threads is set to x * number of logical cores.
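As a rough Rust-side parallel to that long-running/short-running hint (not something stated above, just an illustration): in tokio-threadpool, blocking or long-running work is marked explicitly with tokio_threadpool::blocking, and that is what consumes the backup threads capped by max_blocking. A minimal sketch, assuming tokio-threadpool 0.1 and futures 0.1:

```rust
use futures::future::{lazy, poll_fn};
use futures::Future;
use tokio_threadpool::{blocking, ThreadPool};

fn main() {
    let pool = ThreadPool::new();

    pool.spawn(lazy(|| {
        // `blocking` annotates the closure as blocking work, loosely the role
        // .NET's long-running hint plays; it runs on one of the backup threads
        // counted against `max_blocking`.
        poll_fn(|| {
            blocking(|| {
                std::thread::sleep(std::time::Duration::from_millis(50));
            })
        })
        .map_err(|_| ())
    }));

    pool.shutdown_on_idle().wait().unwrap();
}
```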
Potential direction
Start with 30 seconds as the default keep-alive timeout.
Start with, say, 16 * number of logical cores for the maximum number of backup threads used by the blocking function? The fixed 100 threads doesn't seem to be a great default on, say, single-core systems, low-memory embedded systems, or server systems with a massive number of cores. A multiplier of 8-16, I think, would make a reasonable default for maximum blocking threads.
Expand the maximum worker pool size by a smaller multiplier of the logical cores, say 2 * cores, to provide room for APIs like this future-lock: Mutex Line:160 that unfortunately do seem to lock for extremely short periods of time. While it would be ideal not to have these, they do exist, and quite possibly will continue to for a long time. Expanding the max pool gives some room here.
Ideally, it would be great if there were a cleanup mechanism for the worker pool as well, so that it can maintain a thread count equal to the number of logical cores; the 2x multiplier could then be moved to 4x, with the extra threads cleaned up after the keep-alive duration.
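To make the above concrete, here is a rough sketch of what such core-derived defaults could look like if applied through the existing Builder. The multipliers are illustrative only, and the num_cpus crate is assumed for the logical core count:

```rust
use std::time::Duration;

use futures::Future;
use tokio_threadpool::Builder;

fn main() {
    // Illustrative numbers only: derive the pool limits from the logical
    // core count instead of using fixed constants.
    let cores = num_cpus::get();

    let pool = Builder::new()
        .pool_size(2 * cores)                // worker threads: 2x logical cores
        .max_blocking(16 * cores)            // backup/blocking threads: 16x logical cores
        .keep_alive(Duration::from_secs(30)) // idle threads exit after 30 seconds
        .build();

    pool.shutdown_on_idle().wait().unwrap();
}
```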
In scenarios where threads have to be retained, this can always be configured explicitly. This way, the defaults are more attuned to a dynamic workload than a static one - which I'd think fits a good number of web scenarios.