mpsc stream is currently a pessimization #44512
Holy cow, this is some awesome and thorough investigation, thanks so much @JLockerman! These would certainly be some more-than-welcome improvements and everything here makes sense to me. Would you be interested in sending a PR?
Thanks 😊. (I'm also planning to perform a similar analysis on MPSC shared sometime next month.)
Ok sure thing and no worries! I suspect Rust will for sure still be there on the 27th!
Improve performance of spsc_queue and stream. This PR makes two main changes:

1. It switches the `spsc_queue` node caching strategy from keeping a shared counter of the number of nodes in the cache to keeping a consumer-only counter of the number of nodes eligible to be cached.
2. It separates the consumer and producer fields of `spsc_queue` and `stream` into a producer cache line and a consumer cache line.

Overall, it speeds up `mpsc` in `spsc` mode by 2-10x. Variance is higher than I'd like (that 2-10x speedup is on one benchmark); I believe this is due to the drop check in `send` (`fn stream::Queue::send:107`). I think this check can be combined with the sleep-detection code into a version which uses only one shared variable, and only one atomic access per `send`, but I haven't looked through the select implementation enough to be sure.

The code currently assumes a cache line size of 64 bytes. I added a `CacheAligned` newtype in `mpsc` which I expect to reuse for `shared`. It doesn't really belong there; it would probably be best put in `core::sync::atomic`, but putting it in `core` would involve making it public, which I thought would require an RFC.

Benchmark runner is [here](https://github.com/JLockerman/queues/tree/3eca46279c53eb75833c5ecd416de2ac220bd022/shootout), benchmarks [here](https://github.com/JLockerman/queues/blob/3eca46279c53eb75833c5ecd416de2ac220bd022/queue_bench/src/lib.rs#L170-L293). Fixes #44512.
TLDR
As of rustc 1.22.0-nightly (981ce7d8d 2017-09-03), cloning a `mpsc::Sender` speeds up spsc workloads. My benchmarking seems to indicate that this is due to:

1. Contention on shared counters in the node cache (`push`, `pop`).
2. False sharing between the producer and consumer fields in the underlying queue.
3. False sharing or contention in the wakeup code.

2 can be fixed by simply changing the alignment of the offending members so that the producer and consumer parts are on separate cache lines. 1 can be fixed with a small rewrite so that the queue only tracks its cache size at the consumer (a version of this code can be found here). 3 can be mitigated by reworking the alignment, but I am not sure if that's a full fix; rewriting the send and recv logic so that no counter is needed may also fix the issue (a version of the code that does this can be found here and here, but it is not complete).
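The false-sharing fix (point 2) can be sketched with an alignment newtype. This is a hypothetical illustration, not the actual `spsc_queue` layout: it assumes a 64-byte cache line (as the analysis does) and uses made-up field names to show producer and consumer state landing on separate lines.

```rust
use std::sync::atomic::AtomicUsize;

// Force whatever it wraps onto its own 64-byte cache line.
#[repr(align(64))]
struct CacheAligned<T>(T);

// Hypothetical queue counters: because each field occupies a distinct
// cache line, a producer write to `head` never invalidates the cache
// line the consumer is spinning on for `tail`, and vice versa.
struct Counters {
    head: CacheAligned<AtomicUsize>, // written by the producer
    tail: CacheAligned<AtomicUsize>, // written by the consumer
}
```

With both fields in one unaligned struct they would typically share a line, and every write by one side would force a coherence miss on the other.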
In the following benchmark (derived from crossbeam; full code can be found in this repo), on an early 2014 Intel i5 running macOS, using rustc 1.22.0-nightly (981ce7d8d 2017-09-03) or rustc 1.20.0 (f3d6973f4 2017-08-27), `stream` runs at roughly 201 ns/send while `shared` runs at 134 ns/send. Running on Linux on EC2 and a Raspberry Pi shows similar behavior.
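The kind of throughput measurement described above can be sketched as follows. This is a minimal stand-in, not the crossbeam-derived harness from the linked repo: one producer thread pushes `n` messages through a `std::sync::mpsc` channel and we report nanoseconds per send.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

// Send `n` messages across a channel from a single producer thread and
// return the average wall-clock nanoseconds per send/recv pair.
fn bench_spsc(n: u64) -> f64 {
    let (tx, rx) = mpsc::channel();
    let start = Instant::now();
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).unwrap();
        }
        // `tx` is dropped here, closing the channel.
    });
    let mut sum = 0u64;
    for _ in 0..n {
        sum = sum.wrapping_add(rx.recv().unwrap());
    }
    producer.join().unwrap();
    assert_eq!(sum, n * (n - 1) / 2); // sanity check: every message arrived
    start.elapsed().as_nanos() as f64 / n as f64
}
```

Cloning the `Sender` before benchmarking would flip the channel into `shared` mode, which is how the `stream`-vs-`shared` comparison above is obtained.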
The underlying data structures show some difference in performance (spsc queue 75 ns/send, mpsc 59 ns/send), though not enough to fully explain the difference. Though I have not yet looked into the `mpsc` code enough to be sure of the difference, I did find potential improvements for `spsc`:

[Benchmark table: ns/send for the Unaligned vs. Cache Aligned layouts under the Default, No, Unbounded, and Low Contention cache strategies; the numeric values were lost in extraction.]
Where Unaligned is the current struct layout, Cache Aligned aligns the consumer fields and producer fields to their own cache line (code is here).
Default cache is the current node cache implementation in std, No Cache disables the Node cache entirely (producer code, consumer code), and Unbounded Cache should be self explanatory.
Low Contention Cache rewrites the cache bounding logic to be done entirely on the consumer side. Instead of keeping a count of how many nodes are in the cache, it keeps track of how many nodes are marked as eligible to be cached, and caches exactly the nodes so marked (code for this can be found here; my experimental implementation stores the eligible flag in the node, though it could also be done by stealing a bit from the pointer).
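The consumer-side bounding idea can be shown with a deliberately simplified, single-threaded model (the real queue is lock-free; names here are made up for illustration): nodes carry an `eligible` flag, and the consumer caches exactly the eligible nodes, so no shared counter is ever touched on the hot path.

```rust
// Simplified model of the "Low Contention Cache" scheme. The eligible
// flag is set when a node is created while under the cache bound; the
// consumer then recycles precisely those nodes and frees the rest.
struct Node<T> {
    value: Option<T>,
    eligible: bool,
}

struct Consumer<T> {
    cache: Vec<Box<Node<T>>>, // consumer-local free list of allocations
}

impl<T> Consumer<T> {
    fn recycle(&mut self, mut node: Box<Node<T>>) {
        if node.eligible {
            node.value = None;     // drop the payload now
            self.cache.push(node); // keep the allocation for reuse
        }
        // ineligible nodes are simply freed when `node` goes out of scope
    }
}
```

Because eligibility is fixed at node creation, the number of eligible nodes in existence bounds the cache size without any producer/consumer-shared counter.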
Some of these performance improvements translate to the full `stream` structure:

[Benchmark table: ns/send for the `stream` variants; the numeric values were lost in extraction.]
But to fully see the benefits, contention with the wakeup state needs to be removed:

[Benchmark table: ns/send with wakeup-state contention removed; the numeric values were lost in extraction.]
(These numbers were collected from a version of stream that does not use a counter at all. I've gotten similar numbers from simply putting every field in stream on its own cache line. I think there should be a layout which uses exactly two cache lines, one for the producer and one for the consumer, with similar performance, but I have not done enough benchmarking to confirm it yet.)
All the code to reproduce these numbers can be found in this repo, along with the numbers for a Raspberry Pi.
Note that the Raspberry Pi seemed to be undergoing thermal throttling as the benchmark ran, so numbers gathered later in the run are significantly worse than those gathered at the beginning.
My apologies for the length and quality of writing.
cc @alexcrichton I believe.