Add support for a default core per thread #212

Open · wants to merge 2 commits into master

Conversation


@tomyan commented May 22, 2017

This adds a thread-local variable that is lazily initialized to a
default core, along with a handle for that core. It also proposes three
API methods for interacting with that core:

    // get a mutable reference to the core in a callback
    tokio::reactor::with_default_core(|core| ... );

    // run the core (in this case indefinitely)
    tokio::reactor::run_default(future::empty());

    // get a handle to the core - in this case for setting a timeout
    let timeout = tokio::reactor::Timeout::new(
        Duration::from_millis(100),
        &tokio::reactor::default_handle()
    );

The purpose of this proposed change is to let libraries based on tokio
use this core by default, making the dominant use-case (one event loop
per thread) more ergonomic - something like:

    // imagined api
    my_network_service.init(None);
    tokio::reactor::run_default(future::empty());

    // or using a non-default core:
    my_network_service.init(Some(core.handle()));
    core.run(future::empty())
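The thread-local lazy-initialization pattern the description refers to can be sketched with std types alone. Note this is an illustrative sketch, not the PR's code: `Core` here is a hypothetical stand-in for `tokio_core::reactor::Core`, purely so the sketch compiles without the crate.

```rust
use std::cell::RefCell;

// Hypothetical stand-in for tokio_core::reactor::Core, so this sketch
// is dependency-free; only the thread-local pattern matters here.
struct Core;

thread_local! {
    // One slot per thread; the Core is only created on first use.
    static DEFAULT_CORE: RefCell<Option<Core>> = RefCell::new(None);
}

// Rough shape of the proposed with_default_core: borrow the slot,
// lazily initialize it, and hand a mutable reference to the callback.
fn with_default_core<F, R>(f: F) -> R
where
    F: FnOnce(&mut Core) -> R,
{
    DEFAULT_CORE.with(|cell| {
        let mut slot = cell.borrow_mut();
        let core = slot.get_or_insert_with(|| Core);
        f(core)
    })
}
```

Each thread that calls `with_default_core` gets its own lazily created instance, which is what makes the "one event loop per thread" default possible without any global setup.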

@alexcrichton (Contributor)

Thanks for the PR! I think we're not quite ready to merge this just yet, though. We've discussed thread-local event loops in the past, but the discussion has unfortunately been somewhat inconclusive so far. A thread-local default is certainly nice in some situations, but it isn't always what you want, as there tend to be lots of hidden gotchas (e.g. calling run_default inside of run_default in this PR panics).

We've mostly considered punting this to 0.2 for now while we tackle more pressing design questions, but this is something we'd definitely like to revisit before the next major version!

@tomyan (Author) commented May 24, 2017

I've no particular timeframe in mind, just trying to propose something useful - please feel free to ignore until it makes sense to consider again. With that said...

Trying to run the default loop while it is already running indicates a bug in the calling code, right? If so, is it better to panic (maybe with a more helpful message), or to return a Result and let the user handle the error?
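One possible Result-returning shape can be sketched with `RefCell::try_borrow_mut` on a std thread-local. This is a hedged sketch only: the slot holds a plain `u32` as a stand-in for the core, just so it runs without tokio, and the error string is illustrative.

```rust
use std::cell::RefCell;

// Stand-in slot for the thread-local default core (a plain counter
// here, purely so the sketch runs without tokio).
thread_local! {
    static DEFAULT_CORE: RefCell<Option<u32>> = RefCell::new(None);
}

// Instead of panicking on a re-entrant call, try_borrow_mut lets us
// return an Err that the caller can handle.
fn with_default_core<F, R>(f: F) -> Result<R, &'static str>
where
    F: FnOnce(&mut u32) -> R,
{
    DEFAULT_CORE.with(|cell| match cell.try_borrow_mut() {
        Ok(mut slot) => Ok(f(slot.get_or_insert(0))),
        Err(_) => Err("default core already in use (is run_default already running on this thread?)"),
    })
}
```

A nested call sees the outstanding borrow and gets an Err instead of a panic, so the caller decides whether that is fatal.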

@tomyan (Author) commented May 24, 2017

Having considered this some more, I'm not sure it's a good idea after all (probably way behind you here). I've been wrestling with lifetimes in the server I'm building on hyper: the events being handled by the event loop have to outlive the event loop itself, since Rust won't let the core dispatch events that no longer exist (something like that, anyway). Making the core and handle have a 'static lifetime within the thread seems like it would just compound that problem. I'd be interested in your thoughts on this, but if you agree, please feel free to close the PR.

Thanks

Tom

@alexcrichton (Contributor)

Nah, I think having some "official" solution along these lines is a good idea - there are quite a few contexts where this would be useful! I think, though, that this may require changes to the core API along the way to ensure it integrates well with the rest of the system. We've got a lot of 0.2-tagged issues and PRs, so no worries leaving it open :)

When we try to use the core but it is already borrowed (most likely
because the event loop is already running), give a more helpful error
message. This still occurs at runtime, unfortunately, but it will at
least help identify the cause of the problem.
@tomyan (Author) commented May 26, 2017

Finally got to the bottom of what wasn't working and it wasn't what I thought it was at all. I have a copy of the proposed API in my project and it seems to be working okay. I've pushed a small change for the error case you mentioned to improve the error message, but it's still a panic. Let me know if you think this can be improved on.

@mehcode commented Aug 6, 2017

I really love the idea of a default, thread-local Core.


I have a (somewhat) novel addition though.

  • Add a Drop impl to Future that would schedule the future on the default, thread-local event loop. This drastically simplifies understanding of the common case.

See the following (adjusted from https://github.com/hyperium/hyper/blob/master/examples/client.rs):

fn main() {
    let work = make_request();
    // `work` is dropped here, so under this proposal the future would be
    // scheduled on the default core and run to completion before main returns
}

#[async]
fn make_request() {
    let client = Client::new();
    let res = await!(client.get("https://www.google.com".parse()?));
    
    println!("Response: {}", res.status());
    println!("Headers: \n{}", res.headers());

    res.body().for_each(|chunk| {
        io::stdout().write_all(&chunk).map_err(From::from)
    })
}

All boilerplate goes away completely. This sort of pseudo merge of async -> sync is very common when working with async code.
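The run-on-drop behaviour being proposed can be sketched with std types alone. To be clear, this is an illustrative sketch, not the proposal's code: a plain closure stands in for the future, and the work runs inline in `drop`, whereas the real proposal would hand the future to the thread-local default core.

```rust
// Wrapper whose Drop runs the deferred work, standing in for
// "schedule this future on the default core when it is dropped".
struct RunOnDrop<F: FnOnce()> {
    work: Option<F>,
}

impl<F: FnOnce()> RunOnDrop<F> {
    fn new(work: F) -> Self {
        RunOnDrop { work: Some(work) }
    }
}

impl<F: FnOnce()> Drop for RunOnDrop<F> {
    fn drop(&mut self) {
        // take() lets us call the FnOnce exactly once from &mut self.
        if let Some(work) = self.work.take() {
            work();
        }
    }
}
```

Nothing happens while the value is alive; the deferred work only fires when the value goes out of scope, which is the "pseudo merge of async into sync" being described.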

@dwrensha (Contributor) commented Aug 6, 2017

Add a Drop impl to Future that would schedule the future on the default, thread-local event loop. This drastically simplifies understanding of the common case.

Currently, dropping a future cancels the action it represents, and many applications rely on these semantics. How would cancellation be supported with the addition you're imagining?
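For illustration, the cancel-on-drop semantics described here can be mimicked with std types. This is a toy sketch, not tokio's implementation: `PendingRequest` is a hypothetical stand-in for a real future, and dropping it merely flags the underlying action as cancelled.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Toy "future": dropping it marks the underlying action cancelled,
// mirroring how dropping a real tokio future cancels the I/O it drives.
struct PendingRequest {
    cancelled: Rc<Cell<bool>>,
}

impl PendingRequest {
    // Returns the request plus a shared flag an observer can inspect.
    fn new() -> (PendingRequest, Rc<Cell<bool>>) {
        let flag = Rc::new(Cell::new(false));
        (PendingRequest { cancelled: flag.clone() }, flag)
    }
}

impl Drop for PendingRequest {
    fn drop(&mut self) {
        self.cancelled.set(true);
    }
}
```

This is why the two proposals conflict: the same `drop` cannot simultaneously mean "cancel this action" and "schedule this action to run".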

@mehcode commented Aug 6, 2017

I'd say that is very unintuitive and doesn't seem to be documented anywhere (but I may have missed it).

Actually, I think I understand what you meant: it doesn't actively cancel the action, it just never runs it.

From what I can tell, Future still needs a cancellation method or token solution in order to cancel in-progress actions.

If we decided on some_future.cancel() for that it'd make sense for it to prevent automatic scheduling as well.

As for libraries/applications relying on this... it sounds like you're implying that they create futures and then immediately drop them without executing them. I feel that should be linted against at the very least, as it feels like an error.

Now, I don't have a ton of experience with futures in Rust, so I may be misunderstanding the situation completely. I'm coming from node.js's understanding of promises.

@alexcrichton (Contributor)

@mehcode ah yeah so this is where Rust can really show its strength with owned values and deterministic destruction. Having Drop for a future imply cancellation has been one of the single largest benefits of using Rust in the asynchronous space I think. For that reason I don't think we're ever going to change that.

That being said, though, I don't think this means that everything has to be unergonomic. For example, I'd imagine your example in a "perfect world" looking like:

fn main() {
    let work = make_request().wait().unwrap();
}

#[async]
fn make_request() -> Result<(), hyper::Error> {
    let client = Client::new();
    let res = await!(client.get("https://www.google.com".parse()?))?;
    
    println!("Response: {}", res.status());
    println!("Headers: \n{}", res.headers());

    #[async]
    for chunk in res.body() {
        io::stdout().write_all(&chunk)?;
    }
    Ok(())
}

Note that Handle is gone, Core is gone, etc. The wait is an explicit way to say "block this thread waiting for that result".
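The blocking semantics of `wait` can be sketched with a channel using std only. This is a rough analogy, not the futures API: `wait_for` is a hypothetical helper that runs the work on another thread and blocks the caller for the result, which is approximately what calling `.wait()` on a future does.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical helper mirroring future.wait(): run `work` elsewhere
// and block the calling thread until its result arrives.
fn wait_for<T, F>(work: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // send the result back; the receiver end is blocking on it
        let _ = tx.send(work());
    });
    rx.recv().expect("worker thread panicked")
}
```

The explicitness is the point: the caller opts into blocking at one clearly marked spot, rather than having the blocking hidden in a Drop impl.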

@mehcode commented Aug 7, 2017

@alexcrichton I guess I'm just having a disconnect coming from a land of eager futures vs lazy futures. I can see that there is a strong advantage to lazy, and I didn't realize my example would still work well with the changes in this PR, even without my addition.
