Cleaning up the blocking module #112
Totally agree with your point, the copy-paste blocking module makes rspotify painful to maintain.
This solution is good, we don't have to generate code by hand; I can't wait to see it. And there is one more thing we should keep an eye on: will the state of the blocking calls become more complicated to maintain? After the discussion in PR #102, I came up with an intuition almost the same as this:

```rust
fn with_block_on() -> Result<String, reqwest::Error> {
    let mut rt = tokio::runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async move {
        original().await
    })
}
```

And my question is whether there are more state machines and errors we have to take care of compared with the copy-paste version.
Yeah, but I think it's something like:
I don't understand what you mean by that. The way I see it, performance/complexity-wise it's basically the same as the copy-paste version. I'll start working on this once #95, #110 and #113 are finished, but I still need to figure out how to create only a single global runtime and how to make it more performant.
Yep, I get your point.
How about the Singleton design pattern? Would it be helpful?
I'm not a big fan of singletons, but I guess it's the only way to do it in this case. Isn't what I suggested with `lazy_static` basically a singleton already?
I recommend using a shared runtime, but with a small change compared to what you have now:

Alternatively, you could spawn the operation on the shared runtime and wait for the spawned task to finish. I'm not sure which of the two is better. As a slight variation on the above, you can use a basic scheduler, but spawn a thread that calls `block_on`, waiting on e.g. a channel with jobs. This is what reqwest's blocking module does afaik.
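For reference, here's a minimal sketch of that last variation: a dedicated thread owns a basic-scheduler runtime and blocks on jobs it receives over a channel. The `BlockingClient` type, its `run` method and the job channel layout are made up for illustration (this is not reqwest's or rspotify's actual code), using the same tokio 0.2 builder calls as the rest of the thread:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc;
use std::thread;

// A job is any future the caller wants driven to completion on the
// runtime thread.
type Job = Pin<Box<dyn Future<Output = ()> + Send>>;

pub struct BlockingClient {
    sender: mpsc::Sender<Job>,
}

impl BlockingClient {
    pub fn new() -> Self {
        let (sender, receiver) = mpsc::channel::<Job>();
        // The runtime lives on its own thread for the lifetime of the client.
        thread::spawn(move || {
            let mut rt = tokio::runtime::Builder::new()
                .basic_scheduler()
                .enable_all()
                .build()
                .unwrap();
            // Block on jobs one at a time; the loop (and the thread) ends
            // once every sender has been dropped.
            for job in receiver {
                rt.block_on(job);
            }
        });
        BlockingClient { sender }
    }

    // Runs an async operation to completion and returns its output,
    // blocking the calling thread in the meantime.
    pub fn run<F, T>(&self, fut: F) -> T
    where
        F: Future<Output = T> + Send + 'static,
        T: Send + 'static,
    {
        let (tx, rx) = mpsc::channel();
        let job: Job = Box::pin(async move {
            // If the caller is gone, there's nobody to report the result to.
            let _ = tx.send(fut.await);
        });
        self.sender.send(job).unwrap();
        rx.recv().unwrap()
    }
}
```

With this, something like `client.run(original())` behaves like the copy-paste blocking call while reusing the async implementation, and only one runtime and one extra thread exist for the whole program.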
Thanks for the recommendations, @Darksonn! I'm extending the initial benchmark with all the possibilities. The benchmark obviously isn't reliable enough to base the choice on, but I think it gives an idea of how the performance is affected:

main.rs:

```rust
use std::time::Instant;
use std::sync::{Arc, Mutex};
use std::thread;
use lazy_static::lazy_static;
use tokio::runtime;

async fn original() -> Result<String, reqwest::Error> {
    reqwest::get("https://www.rust-lang.org")
        .await?
        .text()
        .await
}

fn with_copypaste() -> Result<String, reqwest::Error> {
    reqwest::blocking::get("https://www.rust-lang.org")?
        .text()
}

lazy_static! {
    // Mutex to get mutable access (`Runtime::block_on` takes `&mut self` in
    // tokio 0.2); the Arc isn't strictly necessary, since a static Mutex is
    // already thread-safe.
    static ref RT_THREADING: Arc<Mutex<runtime::Runtime>> = Arc::new(Mutex::new(runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()
        .unwrap()));
    // Without a Mutex; it has to be used through `handle()`, whose
    // `block_on` only takes `&self`.
    static ref RT: runtime::Runtime = runtime::Runtime::new().unwrap();
}

fn with_block_on_local_runtime() -> Result<String, reqwest::Error> {
    let mut rt = tokio::runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async move {
        original().await
    })
}

fn with_block_on_threaded_runtime() -> Result<String, reqwest::Error> {
    RT_THREADING.lock().unwrap().block_on(async move {
        original().await
    })
}

fn with_block_on_handle() -> Result<String, reqwest::Error> {
    RT.handle().block_on(async move {
        original().await
    })
}

fn with_block_on_spawn_handle() -> Result<String, reqwest::Error> {
    RT.handle().block_on(async {
        RT.spawn(async move {
            original().await
        }).await.unwrap()
    })
}

fn main() {
    macro_rules! benchmark {
        ($name:ident) => {
            let mut total = 0;
            let now = Instant::now();
            print!("benchmarking {} single-threaded ...", stringify!($name));
            for _ in 1..50 {
                $name().unwrap();
            }
            total += now.elapsed().as_millis();
            println!(" done in {}ms", now.elapsed().as_millis());

            let now = Instant::now();
            print!("benchmarking {} multi-threaded ...", stringify!($name));
            // multi threaded
            let mut handles = Vec::new();
            for _ in 1..50 {
                handles.push(thread::spawn(|| {
                    $name().unwrap();
                }))
            }
            for handle in handles {
                handle.join().unwrap();
            }
            total += now.elapsed().as_millis();
            println!(" done in {}ms", now.elapsed().as_millis());
            println!(">> total time taken: {}ms", total);
        }
    }

    benchmark!(with_copypaste);
    benchmark!(with_block_on_local_runtime);
    benchmark!(with_block_on_threaded_runtime);
    benchmark!(with_block_on_handle);
    benchmark!(with_block_on_spawn_handle);
}
```

Cargo.toml:

```toml
[package]
name = "benchmarks"
version = "0.1.0"
authors = ["Mario Ortiz Manero <marioortizmanero@gmail.com>"]
edition = "2018"

[dependencies]
reqwest = { version = "0.10.8", features = ["blocking"] }
tokio = { version = "0.2.22", features = ["full"] }
lazy_static = "1.4.0"
```

Results on an Intel Pentium G4560 (4) @ 3.500GHz (which, to be fair, is a terrible CPU, especially when multithreading). It'd be a good idea to run the benchmark on more systems:
Perhaps we could do that with the
This is a continuation of #102, which discussed how to properly export the `blocking` module without copy-pasting the async functions and removing `async` and `await`. The current solution makes it painful to maintain rspotify, and compilation times when using the blocking client are huge because you're basically compiling rspotify twice, which kinda defeats the point of #108.

I first thought that using an attribute macro could be a good idea, because all that needs to be modified from the async implementations is removing the `async` and `await` keywords. See this comment for a demonstration. But this still doesn't fix the "compiling the codebase twice" issue, because the code would still be repeated, now generated by the macro instead of written by hand. The macro in question would make compilation times even longer, and it could be messy to implement. The only possible way to "compile the codebase once" would be to have the `blocking` feature export only the blocking interface when enabled, but I'm not sure we can assume that users won't need both the async and blocking interfaces in the same program.
The second possible solution is calling the async implementations in the blocking functions with `block_on`. This might sound less efficient at first, but here's a comparison I made to prove my point:

What? `block_on` was just as fast as the copy-paste? Turns out that this is more or less what `reqwest` does with its own `blocking` module! So both the copy-paste and the runtime solutions are basically doing the same thing. This way we don't even need the `blocking` feature in `reqwest`, and rspotify will only be "compiled once".

This solution could use a smaller macro that automatically generates the boilerplate `block_on` functions to avoid some repetition.
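For illustration, here is a rough sketch of what such a boilerplate-generating macro could look like as a declarative macro (an attribute macro would work too). The `blocking_fn!` name, the hard-coded shared `RT` runtime and the example endpoint are all assumptions, not an actual rspotify API; the dependencies are the same as in the benchmark's Cargo.toml:

```rust
use lazy_static::lazy_static;
use tokio::runtime;

lazy_static! {
    // Shared runtime for every generated blocking wrapper.
    static ref RT: runtime::Runtime = runtime::Runtime::new().unwrap();
}

async fn original() -> Result<String, reqwest::Error> {
    reqwest::get("https://www.rust-lang.org").await?.text().await
}

// Generates a blocking wrapper that just calls `block_on` on the shared
// runtime, so the async implementation stays the single source of truth.
macro_rules! blocking_fn {
    ($name:ident => $async_fn:path, $ret:ty) => {
        fn $name() -> $ret {
            RT.handle().block_on($async_fn())
        }
    };
}

blocking_fn!(blocking_original => original, Result<String, reqwest::Error>);

fn main() {
    println!("{:?}", blocking_original().map(|body| body.len()));
}
```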
The best part is that the example isn't even properly optimized: it creates a runtime every time the function is called. If it were global, it would only have to be initialized once. Here's a more complex and improvable (since it doesn't seem to be actually faster for some reason) version with `lazy_static`:
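A sketch of that `lazy_static` variant, following the same `RT` / `handle().block_on` setup as the benchmark above (the general shape, not necessarily the exact snippet; dependencies as in the benchmark's Cargo.toml):

```rust
use lazy_static::lazy_static;
use tokio::runtime;

lazy_static! {
    // Global runtime, initialized once on first use instead of on every call.
    static ref RT: runtime::Runtime = runtime::Runtime::new().unwrap();
}

async fn original() -> Result<String, reqwest::Error> {
    reqwest::get("https://www.rust-lang.org").await?.text().await
}

fn with_block_on() -> Result<String, reqwest::Error> {
    // `handle().block_on` only needs `&self`, so no Mutex is required.
    RT.handle().block_on(original())
}
```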