[sled-agent] Don't block InstanceManager on full MPSCs #6913
Conversation
Sled-agent's `InstanceManager` task is responsible for managing the table of all instances presently running on the sled. When the sled-agent receives a request relating to an individual instance on the sled, it's sent to the `InstanceManager` over a `tokio::sync::mpsc` channel, and is then dispatched by the `InstanceManager` to the `InstanceRunner` task responsible for that individual instance by sending it over a *second* `tokio::sync::mpsc` channel. This is where things start to get interesting.[^1]

`tokio::sync::mpsc` is a *bounded* channel: there is a maximum number of messages which may be queued by a given MPSC channel at any given time. The `mpsc::Sender::send` method is an `async fn`, and if the channel is at capacity, that method will _wait_ until there is once again space in the channel to send the message being sent. Presently, `mpsc::Sender::send` is called by the `InstanceManager`'s main run loop when dispatching a request to an individual instance. As you may have already started to piece together, this means that if a given `InstanceRunner` task is not able to process requests fast enough to drain its channel, the entire `InstanceManager` loop will wait when dispatching a request to that instance until the queue has been drained. This means that if one instance's runner task has gotten stuck on something, like waiting for a Crucible flush that will never complete (as seen in #6911), that instance will prevent requests from being dispatched to *any other instance* managed by the sled-agent. This is quite unfortunate!

This commit fixes this behavior by changing the functions that send requests to an individual instance's task to instead *shed load* when that instance's request queue is full. We now use the `mpsc::Sender::try_send` method, rather than `mpsc::Sender::send`, which does not wait and instead immediately returns an error when the channel is full. This allows the `InstanceManager` to instead return an error to the client indicating the channel is full, and move on to processing requests to other instances which may not be stuck. Thus, a single stuck instance can no longer block requests from being dispatched to other, perfectly fine instances.

The error returned when the channel is at capacity is converted to an HTTP 503 Service Unavailable error by the API. This indicates to the client that their request to that instance was not able to be processed at this time, but that it may be processed successfully in the future.[^2] Now, we can shed load while allowing clients to retry later, which seems much better than the present situation.

[^1]: In the sense of "may you live in interesting times", naturally.

[^2]: I also considered returning 429 Too Many Requests here, but my understanding is that that status code is supposed to indicate that too many requests have been received from *that specific client*. In this case, we haven't hit a per-client rate limit; we're just overloaded by requests more broadly, so it's not that particular client's fault.
Since none of this code is really meaningfully exercised by the Omicron test suite (due to the pervasive use of the sled-agent simulator), I'd like to do some manual testing of this change (maybe using […]).

This should hopefully fix the highest-level symptoms described in #6911, but I'm not sure if I want to have this PR close that issue, since the root cause (Crucible getting stuck when trying to perform a snapshot after a disk has disappeared) still needs to be addressed.

Also, a potential follow-up from this that I'd like to address is changing how instances are terminated, so that there's a separate mechanism for terminating an instance that takes priority over whatever else it might be doing. That would have better addressed the specific issue in #6911, where a request to expunge the failed disk could not proceed because the instance's task had gotten wedged waiting for the Crucible snapshot to finish, which was unable to complete...because of the bad disk.
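For illustration, here is a minimal sketch of the load-shedding pattern described above, built on tokio's `try_send`. All of the type and method names here (`InstanceRequest`, `InstanceHandle`, `dispatch`, the error variants) are invented for this sketch and are not the actual sled-agent types:

```rust
use tokio::sync::mpsc;
use tokio::sync::mpsc::error::TrySendError;

/// Hypothetical stand-in for sled-agent's real per-instance request type.
enum InstanceRequest {
    CurrentState, // placeholder variant
}

/// Hypothetical stand-in for sled-agent's real error type.
#[derive(Debug)]
enum Error {
    /// The instance's request queue is full; the API layer would map this
    /// to HTTP 503 Service Unavailable so the client can retry later.
    RequestQueueFull,
    /// The instance's task has gone away entirely.
    InstanceGone,
}

struct InstanceHandle {
    tx: mpsc::Sender<InstanceRequest>,
}

impl InstanceHandle {
    /// Dispatch a request without awaiting: if the channel is at capacity,
    /// shed load by returning an error immediately instead of blocking the
    /// InstanceManager run loop behind a potentially-stuck instance.
    fn dispatch(&self, req: InstanceRequest) -> Result<(), Error> {
        self.tx.try_send(req).map_err(|e| match e {
            TrySendError::Full(_) => Error::RequestQueueFull,
            TrySendError::Closed(_) => Error::InstanceGone,
        })
    }
}
```

The key point is that `dispatch` is no longer an `async fn`, so the manager's run loop never parks itself behind a single full channel.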
Hi @smklein! :)
@@ -1092,13 +1150,48 @@ fn propolis_error_code(
}

/// Describes a single Propolis server that incarnates a specific instance.
#[derive(Clone)]
The intent behind deriving `Clone` for this is to allow the instance's handle to be cloned into tasks spawned by the `InstanceManager` for the functions that need to send multiple requests to the instance (or requests to multiple instances) rather than just passing in the `tx` from the original request (e.g. `use_only_these_disks`). I didn't actually do that in this PR but intend to do so in a follow-up.
As a tiny nitpick, I might punt on deriving `Clone` until that PR, because it doesn't seem necessary in this one.

(But also, not a big deal if this is a pain in the butt.)
.map_err(Error::from)
}

/// Create a zone bundle from a named instance zone, if it exists.
-async fn create_zone_bundle(
+fn create_zone_bundle(
While working on this change, I noticed that the error handling in the rest of this function is a little bit off. We presently return errors from `create_zone_bundle` if the provided zone name is invalid, or if there's no instance for that UUID. These errors are bubbled up into the `InstanceManager` run loop, which logs an error...but the response `tx` oneshot just gets dropped, rather than receiving these errors.

This means that the client gets a generic "response dropped" error rather than the more useful error indicating that the zone name is bad or doesn't exist. We should change this function so the errors are sent back over the channel instead, but I felt like that was better done in a separate PR from the rest of this change, for Git history reasons.
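As a rough sketch of what "send the error back over the oneshot instead of dropping it" could look like (the signature, error type, and validity checks here are invented for illustration, not the real sled-agent code):

```rust
use std::collections::HashMap;
use tokio::sync::oneshot;

/// Hypothetical error type for this sketch only.
#[derive(Debug)]
enum BundleError {
    InvalidZoneName(String),
    NoSuchInstance(String),
}

/// Rather than returning early (which drops `tx` and leaves the client with
/// a generic "response dropped" error), send the specific error back over
/// the oneshot so the caller learns what actually went wrong.
fn create_zone_bundle(
    name: &str,
    instances: &HashMap<String, ()>,
    tx: oneshot::Sender<Result<(), BundleError>>,
) {
    let result = if name.is_empty() {
        // placeholder validity check; the real code parses a zone name
        Err(BundleError::InvalidZoneName(name.to_string()))
    } else if !instances.contains_key(name) {
        Err(BundleError::NoSuchInstance(name.to_string()))
    } else {
        // the real code would go on to actually create the zone bundle
        Ok(())
    };
    // If the receiver was dropped there's nobody left to tell; that's fine.
    let _ = tx.send(result);
}
```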
let (tx, rx) = oneshot::channel();
// This will fail if the tx has been dropped, which we just...don't do.
let _ = instance.get_filesystem_zpool(tx);
let Ok(Ok(Some(filesystem_pool))) = rx.await else {
Here and in "instance.terminate" below are two spots where we still be seem to block on a per-instance basis.
I don't think that needs to block this change -- it's still a move in the right direction -- but "avoid .await-ing on a single instance" seems like necessary follow-up, otherwise we could still hit the #6911 issue, right?
(I see you have the TODO
here, but just confirming that this also applies to .terminate
)
Yeah, that's correct. I was planning to do that in another branch, but I'm happy to do all of that here, if you prefer?
Happy to punt it to another PR, if we're doing it somewhere!
// TODO(eliza): clone each instance and spawn a task to handle it,
// so that a single misbehaving instance cannot block the instance
// manager's run loop...
This might be a better comment on the follow-up PR, but I'm having trouble envisioning how this will work. Today this routine won't return until it's disposed of every VMM in `jobs`. Delegating to individual tasks will allow this loop to move on to consider VMM Y even if requests to VMM X aren't currently being serviced, but won't this routine still need to join all of the tasks at the end?

Not a blocker for this PR; just trying to dispel the "what am I missing?" feeling :)
I was imagining that the instances that must be disposed of would be cloned into a task that also owns the request; that task would go and do the whole thing and then complete the request when it finishes, so that instances which don't need to be destroyed can go on handling other requests while the disk expungement is being handled. This is in contrast to forcing requests to instances which know nothing about the expunged disk to wait until the expungement has finished. Does that make sense?
It does, thanks!
The bit I was headed toward is that this ultimately means it will be possible to have an instance manager servicing two separate "use only these disks" commands at once, which is new behavior. That may be fine (I haven't thought about it) but I wanted to make sure I understood what was happening before actually raising it as something worth considering.
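To make the idea discussed in this thread (and the `derive(Clone)` question above) concrete, here is a sketch of the per-instance fan-out under the assumption that the handle is cheaply clonable; the names (`InstanceHandle`, `dispose_of`) and the placeholder request type are invented, not the real code:

```rust
use tokio::sync::mpsc;
use tokio::task::JoinSet;

/// Hypothetical handle; cheap to clone because it only holds an mpsc Sender,
/// which is why deriving `Clone` (discussed above) is useful.
#[derive(Clone)]
struct InstanceHandle {
    tx: mpsc::Sender<()>, // placeholder request type
}

/// Sketch of a `use_only_these_disks`-style fan-out: each affected instance
/// is handled by its own spawned task, so a wedged instance only stalls its
/// own task rather than the InstanceManager's run loop.
async fn dispose_of(handles: Vec<InstanceHandle>) {
    let mut tasks = JoinSet::new();
    for handle in handles {
        tasks.spawn(async move {
            // The spawned task owns the cloned handle and drives the whole
            // teardown for this one instance.
            let _ = handle.tx.send(()).await;
        });
    }
    // As raised in the discussion above, the caller can still join all of
    // the per-instance tasks at the end (or choose to detach them instead).
    while tasks.join_next().await.is_some() {}
}
```

Whether the caller joins or detaches the tasks determines whether two "use only these disks" commands can be in flight at once, which is the new behavior raised above.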
Ran […], which...I'll have to go look into what that actually means --- I'm not sure if that's indicative of an internal bug in […]
Incidentally, the CI failure on c6035fdd788766e6762799aa812d9ac1eab02be6 sure looks like the same Oximeter re-registration issue as #6895/#6916, although it looks like it happened while running a different test: https://buildomat.eng.oxide.computer/wg/0/details/01JAXAGXWPE822857D5D1DP0N4/DMz6rNUPg3AtDtKClnW7rWsxzHSSFztK1IEKxJe1w21To9FJ/01JAXAH9TM5NJ0PDYF6RSJXV5H#S5672

I'm going to re-run that and figure out if it's sticky.
Ah, cool, it looks like "invalid actor state" has a more detailed error message string inside it, which the […]. I'll fix that and hopefully it'll tell us what's actually going on.
Aha: […]. And if I grep for everything that happened to […], I need to think through what this actually means...
Okay, update on this: looks like instances are also going to […]
Hahahahha, when I reran that job, it failed with a similar Oximeter timeout in another Oximeter test ([…]). I'm still pretty sure that's not my fault...
Argh, an entirely new CI failure appears: https://github.com/oxidecomputer/omicron/pull/6913/checks?check_run_id=32025709910

I really hope this one isn't my fault, but will have to dig a bit to find out: […]
AHAHAHAHHAHAHA cool, I just saw a new Oximeter-related test flake: https://buildomat.eng.oxide.computer/wg/0/details/01JAZVE3ECZRGC5PTJZJB1KJ2N/hh7z1TRG37lRfhcfB0Fy098oj4khSbUKkzckUQR55kd8d4jk/01JAZVEN0WFKG1KABT2YD0M74P#S6000
Stacked on top of #6913

Presently, sled-agent sends requests to terminate an instance to the `InstanceRunner` task over the same `tokio::sync::mpsc` request channel as all other requests sent to that instance. This means that the `InstanceRunner` will attempt to terminate the instance only once other requests received before the termination request have been processed, and an instance cannot be terminated if its request channel has filled up. Similarly, if an instance's `InstanceRunner` task is waiting for an in-flight request to the VMM to complete, the request to terminate the instance will not be seen until the current request to Propolis has returned. This means that if the instance has gotten stuck for some reason --- e.g., because it is attempting a Crucible snapshot that cannot complete because a physical disk has gone missing, as seen in #6911 --- the instance cannot be terminated. Sadly, in this case, the only way to resolve the stuck request is to terminate the instance, but we cannot do so *because* the instance is stuck.

This seems unfortunate: if we try to kill an instance because it's doing something that it will never be able to finish, it shouldn't be able to say "no, you can't kill me, I'm too *busy* to die!". Instead, requests to terminate the instance should be prioritized over other requests.

This commit does that. Rather than sending termination requests to the `InstanceRunner` over the same channel as all other requests, we instead introduce a separate channel that's *just* for termination requests, which is preferred over the request channel in the biased `tokio::select!` in the `InstanceRunner` run loop. This means that a full request channel cannot stop a termination request from being sent. When a request to the VMM is in flight, the future that awaits that request's completion is now one branch of a similar `tokio::select!` with the termination channel. This way, if a termination request comes in while the `InstanceRunner` is awaiting an in-flight instance operation, it will still be notified immediately of the termination request, cancel whatever operation it's waiting for, and go ahead and terminate the VMM immediately.

This is the correct behavior here, since the terminate operation is intended to forcefully terminate the VMM *now*, and is used internally for purposes such as `use_only_these_disks` killing instances that are using a no-longer-extant disk, or the control plane requesting that the sled-agent forcibly unregister the instance. "Normal" requests to stop the instance gracefully will go through the `instance_put_state` API instead, sending requests through the normal request channel and allowing in-flight operations to complete.
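As a rough illustration of the mechanism this stacked PR describes (not the actual implementation), here is a sketch of a runner loop with a dedicated termination channel and a biased `tokio::select!`; the type and function names are invented:

```rust
use tokio::sync::mpsc;

// Placeholder request types for this sketch.
struct Request;
struct TerminateRequest;

/// Sketch of the runner loop described above: termination requests arrive on
/// their own channel and win the biased `select!`, so a full request queue
/// can never delay teardown.
async fn run(
    mut terminate_rx: mpsc::Receiver<TerminateRequest>,
    mut request_rx: mpsc::Receiver<Request>,
) {
    loop {
        tokio::select! {
            // `biased` polls branches in order, so a pending termination
            // request is always seen before ordinary requests.
            biased;

            Some(_term) = terminate_rx.recv() => {
                // forcefully terminate the VMM and exit the loop
                break;
            }
            Some(_req) = request_rx.recv() => {
                // Handle the request; a long-running VMM operation would
                // itself be raced against terminate_rx.recv() in a nested
                // select so that termination can interrupt it.
            }
            else => break,
        }
    }
}
```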