update to tokio v0.3 #473

Closed
cssivision opened this issue Oct 16, 2020 · 1 comment · Fixed by #476

@cssivision

No description provided.

@LucioFranco
Member

I'm currently working on this.

LucioFranco self-assigned this Oct 16, 2020
hawkw added a commit that referenced this issue Oct 27, 2020
This branch updates Tower to Tokio 0.3.

Unlike #474, this branch uses Tokio 0.3's synchronization primitives,
rather than continuing to depend on Tokio 0.2. I think that we ought to
try to use Tokio 0.3's channels whenever feasible, because the 0.2
channels have pathological memory usage patterns in some cases (see
tokio-rs/tokio#2637). @LucioFranco let me know what you think of the
approach used here and we can compare notes!

For the most part, this was a pretty mechanical change: updating
versions in Cargo.toml, tracking feature flag changes, renaming
`tokio::time::delay_for` to `sleep`, and so on. Tokio's channel receivers
also lost their `poll_recv` methods, but we can easily replicate that by
enabling the `"stream"` feature and using `poll_next` instead.

The one actually significant change is that `tokio::sync::mpsc::Sender`
lost its `poll_ready` method, which impacts the way `tower::buffer` is
implemented. When the buffer's channel is full, we want to exert
backpressure in `poll_ready`, so that callers such as load balancers
can choose to call another service rather than waiting for buffer
capacity. Previously, we did this by calling `poll_ready` on the
underlying channel sender.
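
To make the motivation concrete, here's a hypothetical illustration (not code from this branch) of why a polling interface matters to a balancer: it can probe each backend's readiness without committing to wait on any of them.

```rust
use std::task::{Context, Poll};
use tower::Service;

// Hypothetical helper: return the first backend that currently has capacity.
fn first_ready<'a, S, R>(services: &'a mut [S], cx: &mut Context<'_>) -> Option<&'a mut S>
where
    S: Service<R>,
{
    for svc in services.iter_mut() {
        // `Poll::Pending` here just means "try the next backend" -- exactly
        // the choice an `async fn ready` interface takes away from callers.
        if let Poll::Ready(Ok(())) = svc.poll_ready(cx) {
            return Some(svc);
        }
    }
    None
}
```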

Unfortunately, this can't be done easily using Tokio 0.3's bounded MPSC
channel, because it no longer exposes a polling-based interface, only an
`async fn ready`, which borrows the sender. Therefore, we implement our
own bounded MPSC on top of the unbounded channel, using a semaphore to
limit how many items are in the channel.
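
Roughly, the idea looks like this (a simplified sketch written against the current Tokio semaphore API, whose signatures differ slightly from 0.3; names like `BoundedTx` are made up for the example):

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, OwnedSemaphorePermit, Semaphore};

// An unbounded channel bounded by a semaphore: each message carries the
// permit acquired to send it, so receiving (and dropping) the message
// releases one unit of capacity back to senders.
struct BoundedTx<T> {
    tx: mpsc::UnboundedSender<(T, OwnedSemaphorePermit)>,
    capacity: Arc<Semaphore>,
}

fn bounded<T>(cap: usize) -> (BoundedTx<T>, mpsc::UnboundedReceiver<(T, OwnedSemaphorePermit)>) {
    let (tx, rx) = mpsc::unbounded_channel();
    let capacity = Arc::new(Semaphore::new(cap));
    (BoundedTx { tx, capacity }, rx)
}

impl<T> BoundedTx<T> {
    async fn send(&self, value: T) -> Result<(), mpsc::error::SendError<T>> {
        // Backpressure lives here: waiting for a permit is waiting for
        // channel capacity.
        let permit = self
            .capacity
            .clone()
            .acquire_owned()
            .await
            .expect("semaphore is never closed in this sketch");
        self.tx
            .send((value, permit))
            .map_err(|mpsc::error::SendError((v, _))| mpsc::error::SendError(v))
    }
}
```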

I factored out the code for polling a semaphore acquire future from
`limit::concurrency` into its own module, and reused it in `Buffer`.
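
The shape of that shared building block, again as a sketch using today's semaphore API rather than the exact code in this branch:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use tokio::sync::{OwnedSemaphorePermit, Semaphore};

// Keep the in-flight `acquire_owned` future between calls so that a
// `poll_ready` implementation can drive it to completion.
struct PollSemaphore {
    semaphore: Arc<Semaphore>,
    acquiring: Option<Pin<Box<dyn Future<Output = OwnedSemaphorePermit> + Send>>>,
}

impl PollSemaphore {
    fn poll_acquire(&mut self, cx: &mut Context<'_>) -> Poll<OwnedSemaphorePermit> {
        let fut = self.acquiring.get_or_insert_with(|| {
            let sem = self.semaphore.clone();
            Box::pin(async move { sem.acquire_owned().await.expect("semaphore closed") })
        });
        match fut.as_mut().poll(cx) {
            Poll::Ready(permit) => {
                // Clear the stored future so the next call starts a fresh acquire.
                self.acquiring = None;
                Poll::Ready(permit)
            }
            Poll::Pending => Poll::Pending,
        }
    }
}
```

(For what it's worth, `tokio-util` now ships a `PollSemaphore` utility with this shape.)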

Additionally, the buffer tests needed to be updated, because they
didn't actually poll the buffer service before calling it. This
violates the `Service` contract, and the new code fails as a
result.
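
For reference, the contract-respecting pattern the tests should follow looks like this (a sketch using `tower::ServiceExt`; in current tower, `ready()` awaits `poll_ready` before handing the service back, though at the time of this commit the method was spelled differently):

```rust
use tower::{Service, ServiceExt};

// Drive a service the way the `Service` contract requires: wait for
// `poll_ready` to return `Ready(Ok(()))` before calling it.
async fn call_when_ready<S, R>(svc: &mut S, req: R) -> Result<S::Response, S::Error>
where
    S: Service<R>,
{
    let ready = svc.ready().await?; // backpressure is applied here
    ready.call(req).await
}
```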

Closes #473 
Closes #474

Co-authored-by: Lucio Franco <luciofranco14@gmail.com>
Signed-off-by: Eliza Weisman <eliza@buoyant.io>