Investigate performance regression from tokio channel upgrade #6043
Comments
cc @ktff and @lukesteensen. I'm curious if we should work to optimize the existing code or consider a revert.
Ran the rest of the test harness tests against this commit; there are a few that show regressions:
I think it's better to optimize. If the channels are indeed slower, it will be easier to migrate to a different futures 0.3/async channel from here than from what was there before, so #5839 can move forward. An alternative would be to revert, but still investigate and optimize the current implementation, since we will need to transition to it at some point. Regarding the regressions, it seems most of the slowdown comes from more writes to disk and the resulting I/O contention, and therefore the usual increase in
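For context on the channel API being migrated to, here is a minimal sketch of a bounded tokio mpsc channel of the kind the futures 0.3/async world offers. The capacity, message type, and counts are purely illustrative and are not Vector's actual configuration:

```rust
// Minimal sketch of a bounded tokio mpsc channel; capacity and message
// counts are illustrative, not Vector's actual settings.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Bounded channel: senders apply backpressure once the buffer is full.
    let (tx, mut rx) = mpsc::channel::<u64>(1024);

    let producer = tokio::spawn(async move {
        for i in 0..10_000u64 {
            // `send` is async and awaits while the buffer is full, unlike the
            // poll-based futures 0.1 `Sink` interface it would replace.
            if tx.send(i).await.is_err() {
                break; // receiver dropped
            }
        }
    });

    let mut sum = 0u64;
    // `recv` resolves to `None` once every sender has been dropped.
    while let Some(v) = rx.recv().await {
        sum += v;
    }
    producer.await.unwrap();
    println!("received sum = {}", sum);
}
```

The backpressure behavior of the bounded buffer is one place where throughput characteristics can differ from the previous channel, which is why benchmarking the swap matters.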
Sounds good, I've assigned this to you. @jszwedko has been working on our benchmarking suite and we should use that to test for this.
It turned out that
That's on top of fixing this regression. So let's go with
cc @LucioFranco since he once said nothing was faster than Tokio 😁
It appears that #5868 may have introduced a performance regression. We should investigate and/or decide if it is worth the trade-off.
Specifically, I saw a performance regression of around 7% in the test-harness `real_world_1_performance/default` benchmark:
In criterion, I see a regression in some of the topology benchmarks, but an improvement in others:
Here `complex/complex`, `transforms/transforms`, `pipe/pipe_simple`, and `pipe/pipe_multiple_writers` show a regression, but the others show an improvement.
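For reference, a standalone criterion benchmark that isolates raw channel throughput from the topology benches might look like the sketch below. This is not Vector's actual benchmark code; the benchmark name, channel capacity, and message count are made up for illustration:

```rust
// Hypothetical criterion benchmark for raw mpsc channel throughput,
// separate from Vector's topology benchmarks.
use criterion::{criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;
use tokio::sync::mpsc;

fn channel_throughput(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    c.bench_function("mpsc_send_recv_10k", |b| {
        b.iter(|| {
            rt.block_on(async {
                let (tx, mut rx) = mpsc::channel::<u64>(1024);
                // Producer task sends through the bounded channel.
                let producer = tokio::spawn(async move {
                    for i in 0..10_000u64 {
                        tx.send(i).await.unwrap();
                    }
                });
                // Drain the receiver until all senders are dropped.
                let mut sum = 0u64;
                while let Some(v) = rx.recv().await {
                    sum += v;
                }
                producer.await.unwrap();
                sum
            })
        })
    });
}

criterion_group!(benches, channel_throughput);
criterion_main!(benches);
```

Running such a micro-benchmark before and after the channel swap would help separate pure channel overhead from the disk-write and I/O-contention effects mentioned above.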