Replace tokio-timer with futures-timer for higher timer resolution #5
Conversation
Cargo.toml (Outdated)

```diff
@@ -9,7 +9,7 @@ edition = "2018"
 [dependencies]
 tokio = "0.1.14"
 futures = "0.1.25"
-tokio-timer = "0.2.8"
+futures-timer = "0.1"
```
Any higher version requires changing from `futures::Future` to `std::future::Future`.
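To make that constraint concrete, here is a minimal sketch (mine, not code from this PR), assuming `futures-timer` 0.1 exposes a `Delay` that implements the futures 0.1 `Future` trait (`Item = ()`, `Error = io::Error`) and can therefore be driven by a tokio 0.1 runtime alongside the crate's other 0.1-style futures:

```rust
use std::time::Duration;

use futures::Future;
use futures_timer::Delay;

fn main() {
    // Assumed API: Delay::new(duration) resolves after `duration` with Item = ().
    let task = Delay::new(Duration::from_micros(50))
        .map(|()| println!("timer fired"))
        .map_err(|e| eprintln!("timer error: {}", e));

    // tokio 0.1's executor accepts any futures 0.1 Future<Item = (), Error = ()>.
    tokio::run(task);
}
```

Later `futures-timer` releases implement `std::future::Future` instead, which would not compose with the futures 0.1 combinators used above; that is why the dependency is pinned to 0.1 here.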
To recap the open questions from the PR description: (1) why does `tokio-timer` not support a higher-resolution timer, (2) is there any problem with `futures-timer` at a higher resolution than `tokio-timer`, and (3) what is the resolution of `futures-timer`? For my use case of batching syscalls, replacing the timer with `futures-timer` …
Thanks for the summary. I'd be interested in a more detailed explanation on this.

Glad you found a solution. If you'd like to continue working on your WIP, you'd be more than welcome to do so. A higher-resolution timer might be beneficial in many other cases.
Later I will post more details on the performance problem of the high-resolution timer with a low timeout. I used the modified …

I will write some sample code to reproduce that.

This may be a common problem for any coroutine runtime. We also ran into it before in the Golang runtime: a low timeout value triggers lots of wake-up operations, resulting in a bunch of …
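To illustrate the effect, here is a standalone, hypothetical simulation (mine, not code from this PR or from any runtime). Each flush below stands in for one write syscall; with a tiny timeout the timer fires before the batch can fill, so nearly every wake-up flushes only a handful of items:

```rust
use std::time::{Duration, Instant};

/// Simulate a batching loop: flush when the batch is full or the timeout expires.
/// Returns the number of flushes (each standing in for one write syscall).
fn simulate(timeout: Duration, max_batch: usize, items: usize, arrival_gap: Duration) -> usize {
    let mut flushes = 0;
    let mut batch = 0;
    let mut now = Instant::now();
    let mut deadline = now + timeout;

    for _ in 0..items {
        now += arrival_gap; // next item arrives
        if batch == max_batch || now >= deadline {
            if batch > 0 {
                flushes += 1; // batch full or timer fired: flush what we have
                batch = 0;
            }
            deadline = now + timeout;
        }
        batch += 1;
    }
    if batch > 0 {
        flushes += 1; // flush the tail
    }
    flushes
}

fn main() {
    // Purely illustrative numbers: items arrive every 5µs, batches hold up to 64 items.
    let items = 100_000;
    let gap = Duration::from_micros(5);
    for &timeout_us in &[1u64, 10, 100, 1_000] {
        let flushes = simulate(Duration::from_micros(timeout_us), 64, items, gap);
        println!(
            "timeout = {:>5}µs -> {:>6} flushes for {} items (~{:.1} items per flush)",
            timeout_us,
            flushes,
            items,
            items as f64 / flushes as f64
        );
    }
}
```

The numbers are made up; the point is only the trend: the lower the timeout relative to the arrival rate, the closer the loop gets to one syscall per item, which is exactly the wake-up cost described above.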
You've been running the benchmarks in release mode, right? (Just to be sure. 😉)

Might be wrong, but I think …

Thanks for your investigation. Would be really curious about your write-up. You could also raise this on the user forum or the subreddit if you like.
Yes, it's in release mode.

Seems like in …
TODO:
…
It seems like in the new … Here's an example: tokio-batch-benchmark.

Build the example:
…

Benchmark without batching:
…

Benchmark with just one timer driven by futures-timer:
…

Benchmark with two timers, a light-weight timer and a timer driven by futures-timer:
…

Adding the light-weight timer makes no significant difference.

By the way: …

UPDATE:
…
Wow. That's great news! Looking forward to reviewing and merging this. I'm at RustFest at the moment, so it'll take some time until I get to it, but I'll try my best to get it in.
For comparison, here are my numbers:
…

(MacBook Pro Mid 2015, 2.2 GHz Intel Core i7, 16 GB RAM)
How do you run the …?
Have you pulled the latest code of tokio-batch-benchmark? Try running this to check whether there is any error message:
…

I will also check it on other computers.
Yup, just like you described:
…

(In all three cases.)
Oh, my fault, it's …

It needs a higher throughput to see the difference.
Ah right. Makes more sense now, although I'm confused about …
But why is …?
Well, if we set the timeout of batching to a large number, it will be slower. But here the timeout is …

Let's ignore the light-weight timer case first. Using …

The key point of batching syscalls for optimization is that sending more data in one syscall does not significantly increase the latency of the syscall: sending 16 bytes takes almost the same time as sending 8 bytes. So the more data packed into every …

On OSX, we can use …

No-batch case

We can collect the number of syscalls and the data sent by …:
…
Within 10 seconds, it triggered 1454801 …

We can also collect the latency of the syscall:
…
Most …

Batching with timers

Collect the data for the case that uses …:
…
Within 10 seconds, we have fewer syscalls, 341701 compared to the previous 1454801, but more throughput, 76533848 compared to 10184009. Since the length of the network packet of a Redis PING is fixed, we can see the same scale as the QPS we saw in …

And the latency of …
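To put those totals in perspective (my own arithmetic, assuming the throughput figures are total bytes written): 10184009 / 1454801 ≈ 7.0 bytes per syscall in the no-batch case, which matches the 7-byte `+PONG\r\n` reply exactly, while 76533848 / 341701 ≈ 224 bytes per syscall in the batched case, i.e. roughly 32 replies packed into each write.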
The flamegraph I generated in my Linux test environment shows that 88% of the time in this fake Redis server is spent on the syscall. It keeps firing the …
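To make the "pack more data per syscall" point concrete, here is a small, self-contained sketch (mine, not the benchmark from this thread). It writes the same fake `+PONG\r\n` replies to a temporary file once per reply and once per batch, so the only difference between the two runs is how many write syscalls are issued:

```rust
use std::fs::File;
use std::io::Write;
use std::time::Instant;

const PONG: &[u8] = b"+PONG\r\n"; // 7-byte fake Redis reply

fn main() -> std::io::Result<()> {
    let replies = 100_000;
    let batch_size = 32;

    // No batching: roughly one write syscall per reply.
    let mut out = File::create(std::env::temp_dir().join("nobatch.out"))?;
    let start = Instant::now();
    for _ in 0..replies {
        out.write_all(PONG)?;
    }
    println!("no batch: {} writes in {:?}", replies, start.elapsed());

    // Batching: buffer `batch_size` replies and write them with a single syscall.
    let mut out = File::create(std::env::temp_dir().join("batch.out"))?;
    let mut buf: Vec<u8> = Vec::with_capacity(batch_size * PONG.len());
    let mut writes = 0;
    let start = Instant::now();
    for i in 0..replies {
        buf.extend_from_slice(PONG);
        if (i + 1) % batch_size == 0 {
            out.write_all(&buf)?;
            buf.clear();
            writes += 1;
        }
    }
    if !buf.is_empty() {
        out.write_all(&buf)?; // flush the tail
        writes += 1;
    }
    println!("batched:  {} writes in {:?}", writes, start.elapsed());
    Ok(())
}
```

On a real socket the same idea applies, and the latency saved per avoided syscall is what shows up as the throughput difference measured above.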
That makes a lot of sense. Thanks for the very detailed explanation and for creating this pull request in general. I'll merge it now. 🎉 Will publish a new version right after, but we should also bump the version once your fix for …
Thanks for creating this crate! This is very helpful for optimizing most network services. I'm so glad to work on it.
Current timer resolution is low for some use cases: #4
This PR replaces `tokio-timer` with `futures-timer` for higher resolution. I still need to compare their implementations to be clear about the following questions:

1. Why does `tokio-timer` not support a higher-resolution timer?
2. Is there any problem with `futures-timer` at a higher resolution than `tokio-timer`?
3. What's the resolution of `futures-timer`?