Choose between backpressure and load shedding for each service #1618
Motivation
In #1593, we reviewed the `poll_ready` implementations in Zebra, but kept the backpressure implementations mostly the same. Incorrect backpressure implementations can impact resource usage and cause hangs, so we should make sure that Zebra transmits backpressure correctly.
Constraints
The constraints imposed by the `tower::Buffer` and `tower::Batch` implementations are:

- `poll_ready` must be called at least once for each `call`
- Callers can get `Poll::Ready` from a buffer, regardless of the current readiness of the buffer or its underlying service
- `Buffer`/`Batch` capacity limits the number of concurrently waiting tasks. Once this limit is reached, further tasks will block, awaiting a free reservation.
- `Buffer`/`Batch` capacity must be larger than the maximum number of concurrently waiting tasks, or Zebra could deadlock (hang).
- Multiple dependent services should be acquired one at a time, using the `ready!` macro, rather than polling them all simultaneously

For example, see #1735.
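To make the first constraint concrete, here is a minimal caller-side sketch. It is a generic helper written for this issue, not Zebra's actual code, and `call_when_ready` is an illustrative name:

```rust
// A hedged sketch of the caller-side pattern implied by the constraints
// above: `poll_ready` is driven (via `ServiceExt::ready`) before every
// `call`, and the acquired buffer slot is used immediately rather than
// being held across unrelated work.
use tower::{Service, ServiceExt};

/// Wait until `service` (or the `Buffer` in front of it) has capacity,
/// then dispatch `request`.
async fn call_when_ready<S, R>(service: &mut S, request: R) -> Result<S::Response, S::Error>
where
    S: Service<R>,
{
    // Waiting here is where backpressure from a full `Buffer` or `Batch`
    // is transmitted to the caller.
    let ready_service = service.ready().await?;

    // Use the reservation straight away, so the buffer slot isn't held
    // while this task does other work.
    ready_service.call(request).await
}
```

Skipping the readiness check, or acquiring readiness and then never calling, are the kinds of bugs the implementation tasks below are meant to surface.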
Design Tasks
Develop consistent design patterns for Zebra backpressure.
These patterns could include backpressure, load shedding, or a combination of both.
The design should include advice on when to use each design pattern.
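For instance, both patterns can be expressed with existing `tower` layers. The sketch below is illustrative only: a stand-in echo service and an arbitrary bound of 20, not Zebra's actual configuration.

```rust
use std::convert::Infallible;
use tower::{service_fn, ServiceBuilder};

#[tokio::main]
async fn main() {
    // Stand-in inner service that just echoes its request.
    let echo = |req: String| async move { Ok::<String, Infallible>(req) };

    // Backpressure: callers of `_buffered` wait in `poll_ready` until one
    // of the 20 buffer slots is free.
    let _buffered = ServiceBuilder::new()
        .buffer::<String>(20)
        .service(service_fn(echo));

    // Load shedding: callers of `_shedding` get an immediate overload
    // error when the buffer is full, instead of waiting.
    let _shedding = ServiceBuilder::new()
        .load_shed()
        .buffer::<String>(20)
        .service(service_fn(echo));
}
```

Broadly, waiting suits requests that must eventually be served, while shedding suits requests that are cheap for the peer to retry or that go stale quickly.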
Complex Design Constraints: Inbound Service
The `Inbound` service currently doesn't transmit backpressure. Ideally, we'd like to transmit backpressure when:

We want to drop old requests when the download queue is full, because the latest requests are most likely to be new blocks we haven't seen yet, so we don't want to buffer any download requests. Since we want recent gossiped blocks, we might want to:
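One option, sketched below with hypothetical types (this is not Zebra's downloader), is a small bounded queue that discards its oldest entry to make room for the newest gossiped block:

```rust
use std::collections::VecDeque;

/// Hypothetical stand-in for a block hash.
struct BlockHash([u8; 32]);

/// A bounded queue of pending block downloads that never blocks the caller.
struct DownloadQueue {
    capacity: usize,
    pending: VecDeque<BlockHash>,
}

impl DownloadQueue {
    fn push_latest(&mut self, hash: BlockHash) {
        if self.pending.len() >= self.capacity {
            // The queue is full: drop the oldest request, because the most
            // recently gossiped blocks are the ones we're most likely to
            // be missing.
            self.pending.pop_front();
        }
        self.pending.push_back(hash);
    }
}
```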
Rejected Designs
We don't want to propagate backpressure in `Inbound::poll_ready`, because we'd be holding a state buffer slot for every concurrent request, even if the request didn't use the state.

We don't want to generate a `Poll::Pending` if the downloader is full, because most requests don't use the downloader. (And we'd have to wake the pending task when the downloader emptied.)

Here's a backpressure design for the address book:
- Acquire the address book lock in `Inbound::call`, so the `Inbound` buffer fills up if the address book is locked or heavily contended
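A minimal sketch of that design, with simplified names rather than Zebra's actual `Inbound` implementation:

```rust
use std::net::SocketAddr;
use std::sync::{Arc, Mutex};

/// Simplified stand-in for the shared address book.
struct AddressBook {
    peers: Vec<SocketAddr>,
}

/// Simplified stand-in for the Inbound service's state.
struct Inbound {
    address_book: Arc<Mutex<AddressBook>>,
}

impl Inbound {
    /// Called synchronously from `Service::call` for peer-address requests.
    fn peer_addresses(&self) -> Vec<SocketAddr> {
        // Blocking on the lock here means the calling task keeps its
        // buffer slot occupied until the lock is acquired, so contention
        // on the address book becomes backpressure on inbound requests.
        self.address_book
            .lock()
            .expect("address book mutex poisoned")
            .peers
            .clone()
    }
}
```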
But I'm not sure if there's an API that does the same thing for the state service:

- Wait for the state service to become ready in `Inbound::call`, so the `Inbound` buffer fills up if the state is busy

Potential Design
We might want to turn `Inbound` requests into a `Stream`, so they hold slots until the future resolves to a result. (Rather than `Buffer`s, which hold slots until the future is created and returned.) That way, we can drop new requests when the `Inbound` stream is full, transmitting backpressure.
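A rough sketch of that idea, using a bounded channel and `for_each_concurrent` as stand-ins (hypothetical request type; not a worked-out Zebra design):

```rust
use futures::channel::mpsc;
use futures::executor::block_on;
use futures::stream::StreamExt;

#[derive(Debug)]
struct InboundRequest(u32);

fn main() {
    // Bounded queue of pending requests. Senders can either `await` a
    // `send` (transmitting backpressure) or use `try_send` and drop the
    // request when the queue is full (shedding load).
    let (mut tx, rx) = mpsc::channel::<InboundRequest>(100);

    if tx.try_send(InboundRequest(1)).is_err() {
        // Queue full: the new request is dropped.
    }
    drop(tx);

    // Unlike a `Buffer`, each of the 20 concurrency slots here is held
    // until the handler future resolves, not just until it is created.
    block_on(rx.for_each_concurrent(20, |request| async move {
        println!("handled {:?}", request);
    }));
}
```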
Implementation Tasks

Review:

- `Service` and `Stream` implementations
- code that uses `Service`s and `Stream`s

and make sure the code uses one of the Zebra backpressure design patterns.
In particular, we should search for the following strings:

- `poll_ready` for `Service`s
- `ready_and` for `Service`s
- `poll_next` for `Stream`s
- `Service` or `ServiceExt` functions that call `poll_ready`
- `Stream` or `StreamExt` functions that call `poll_next`