Add benchmarks #44
A few notes while chipping away at this... some of it might be of interest and inspire additional tests. IMHO, most of the request/response building is unremarkable (in a good way): nothing stands out as expensive. Planning to look at concurrency/load next, and then hopefully we can shape a pull request that provides solid regression benchmarks.
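For concreteness, this is the kind of micro-benchmark it could start from: a minimal sketch assuming criterion as the harness. The `HelloRequest` stand-in here is illustrative, not the generated type; `tonic::Request::new` is generic over the payload, so no prost derive is needed just to measure envelope construction.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Plain stand-in for the generated message type (illustrative only).
#[derive(Clone)]
pub struct HelloRequest {
    pub name: String,
}

fn bench_request_building(c: &mut Criterion) {
    c.bench_function("tonic_request_new", |b| {
        b.iter(|| {
            // Measure only the cost of constructing the request envelope.
            tonic::Request::new(black_box(HelloRequest {
                name: "world".into(),
            }))
        })
    });
}

criterion_group!(benches, bench_request_building);
criterion_main!(benches);
```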
@blittable This looks great!
Started a few end-to-end performance tests over here: https://github.com/blittable/tonic-perf

One challenge: running many iterations of request/response exposes a problem on the client side. If the responses are awaited and handled (e.g. with a println!), the client takes ~32 seconds for 1000 iterations. A Node.js client clocks in at around 170 milliseconds for the same calls against the same tonic server with the same protobuf files. If the same tonic client code instead spawns a task for the invocations (and message creation), it finishes in around 74 microseconds for 1000 iterations. So, the question: what's wrong with this?

```rust
// tokio::spawn(async move {   // <- uncommenting this spawn makes it "crazy fast"
for _ in 0..1000_i32 {
    let request = tonic::Request::new(HelloRequest {
        name: "world".into(),
        iteration: 1,
    });
    match client.say_hello(request).await {
        Ok(resp) => println!("{:?}", resp.into_inner().message),
        Err(e) => println!("Errant response; err = {:?}", e),
    }
}
// });
```

1 - Test server: https://github.com/blittable/tonic-perf/blob/cd8463ffe110ff3502b698b2d69dda5c1ca918fe/hello-tonic/src/hellotonic/server.rs#L38
2 - Test client: https://github.com/blittable/tonic-perf/blob/master/hello-tonic/src/hellotonic/client.rs
3 - Comparable node test: https://github.com/blittable/tonic-perf/tree/master/comparables/node/dynamic_codegen (node greeter_client.js / tonic server)

Side note: Rustwide makes a nice reproducible env for building/testing (easy to flip nightly/beta/stable, Windows/Linux, etc.): https://github.com/blittable/tonic-perf/tree/master/rustwide-build - but it's not so agreeable with distributed configurations.
I believe the reason the commented-out `tokio::spawn` version looks so fast is that the spawned task is never awaited: the loop only measures how long it takes to create the task, not how long the requests take to complete. Also, you probably want to create a future per request without awaiting each one, and use something like `FuturesUnordered` to drive them concurrently. Edit: Here is an example of waiting for multiple requests concurrently. I hope this was helpful!
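To make the spawned variant measure real completion time, the `JoinHandle` returned by `tokio::spawn` has to be awaited before stopping the clock. A minimal sketch under the same assumptions as the snippets above (generated `GreeterClient` and `HelloRequest`, same endpoint):

```rust
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `mut` so the async block below can call `say_hello` on the moved client.
    // (On current tonic, `connect` is async: add `.await` before the `?`.)
    let mut client = GreeterClient::connect("http://[::1]:50051")?;

    let start = Instant::now();

    // Keep the JoinHandle so the spawned work can be driven to completion.
    let handle = tokio::spawn(async move {
        for _ in 0..1000_i32 {
            let request = tonic::Request::new(HelloRequest {
                name: "world".into(),
                iteration: 1,
            });
            if let Err(e) = client.say_hello(request).await {
                println!("Errant response; err = {:?}", e);
            }
        }
    });

    // Without this await, the measurement covers task creation only,
    // which is where the ~74 microsecond figure comes from.
    handle.await?;

    println!("elapsed: {:?}", start.elapsed());
    Ok(())
}
```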
@blittable you can try something like:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // (On current tonic, `connect` is async: add `.await` before the `?`.)
    let client = GreeterClient::connect("http://[::1]:50051")?;

    let mut futures = FuturesUnordered::new();

    // Build one future per request without awaiting any of them yet;
    // each future gets its own clone of the cheaply cloneable client.
    for _ in 0..1000_i32 {
        let request = tonic::Request::new(HelloRequest {
            name: "world".into(),
        });
        let mut client = client.clone();
        futures.push(async move { client.say_hello(request).await });
    }

    // Drive all requests concurrently, handling responses as they complete.
    while let Some(res) = futures.next().await {
        match res {
            Ok(resp) => println!("{:?}", resp.into_inner().message),
            Err(e) => {
                println!("Errant response; err = {:?}", e);
            }
        }
    }

    Ok(())
}
```
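One design note: `FuturesUnordered` here drives all 1000 requests at once over the multiplexed HTTP/2 connection. If unbounded fan-out ever becomes a problem, the in-flight count can be capped with `buffer_unordered`. A sketch under the same assumptions as above (the limit of 100 is an arbitrary choice), slotting in place of the loop in the previous example:

```rust
use futures::stream::{self, StreamExt};

// Same idea, but with at most 100 requests in flight at a time.
let mut responses = stream::iter(0..1000_i32)
    .map(|_| {
        let mut client = client.clone();
        let request = tonic::Request::new(HelloRequest {
            name: "world".into(),
        });
        async move { client.say_hello(request).await }
    })
    .buffer_unordered(100);

while let Some(res) = responses.next().await {
    if let Err(e) = res {
        println!("Errant response; err = {:?}", e);
    }
}
```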
@alce Thanks for that and the route guide! The client's now about 100% faster than the equivalent Node.js version.
It would be handy to add benchmarks mostly around encoding and decoding.
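A sketch of what such a benchmark could look like, assuming criterion as the harness and prost's `Message` trait for the codec work; the message type mirrors the `HelloRequest` used in the thread and is illustrative, not the generated code:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use prost::Message;

// Hand-rolled equivalent of what prost-build would generate (illustrative).
#[derive(Clone, PartialEq, prost::Message)]
pub struct HelloRequest {
    #[prost(string, tag = "1")]
    pub name: String,
}

fn bench_codec(c: &mut Criterion) {
    let msg = HelloRequest { name: "world".into() };
    let encoded = msg.encode_to_vec();

    // Measure protobuf encoding of a small message.
    c.bench_function("encode", |b| {
        b.iter(|| black_box(&msg).encode_to_vec())
    });

    // Measure decoding the bytes produced above.
    c.bench_function("decode", |b| {
        b.iter(|| HelloRequest::decode(black_box(encoded.as_slice())).unwrap())
    });
}

criterion_group!(benches, bench_codec);
criterion_main!(benches);
```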