Live attach/detach of external IPs #4694
Still to solve: migration blocks, handling unexpected runstate changes, and preventing concurrent attach/detach of a given external IP. This takes the time to refactor `dpd_ensure` and `nat_removal` so that they can be targeted at a single IP address on a given device.
...and now the cleanup + locking begins
Should close #4628.
a) needs further testing b) instance stop/delete need to be made state-aware
Still need to fixup start/stop/delete to block on ip-progress external IPs, but then we'll be sound!
Instance stop seems to do nothing, so that's fine -- we probably need to make the attach/detach undo pass tolerate failures to communicate with the sled, since we can't block it directly as-is. Delete is a bit trickier; I need to see what Disk does.
Next up: putting into action my thoughts on improved idempotency.
This required that we drop the non-null parent constraint on ephemeral IPs, but I think that's worth it in the name of consistency.
I took a look at most of the lifecycle interactions here. I think the interlocks around not proceeding with certain tasks if an IP is attaching/detaching mostly work, but in one case I think it's a very close shave, and I think we might need some extra checks for this logic to work correctly with live migration.
// This is performed so that an IP attach/detach will block the
// instance_start saga. Return service unavailable to indicate
// the request is retryable.
if must_all_be_attached
I haven't looked closely at how the NAT RPW actually works, but I'm a little curious as to why we need this behavior. My mental model of that saga is that it maintains a view of the NAT entries that Dendrite should know about and ensures the various `dpd` daemons in the system are told when that view changes. If I have an IP that's attached and an IP that's attaching at this point, could/would/should the attach saga just end up adding the entry for the IP that's still attaching?
Re-ensuring an existing NAT entry for the RPW is a no-op, and should only cost a row lookup. So there's nothing wrong with always pushing all `Attached` IPs apart from wasted effort.

We do still require an override for at least one other entry in an arbitrary state, because of saga unwind of attach/detach: an IP might be `Detaching`, unwind when informing OPTE, and then need to be specifically re-added to NAT. Similarly, an `Attaching` IP might unwind at the same step and need to be removed from NAT. As a result, we don't want other executing sagas to try to do anything with IPs in a state they don't expect or IPs they don't 'own'. We'd also need to ensure that `instance_start` sees "all attached", and that attach/detach don't care about other IPs' states, because they can safely run concurrently for different IP objects on the same instance.

Unrelated: looking over this again I've also seen that `ensure_nat_entry` is needlessly duplicated per IP, which is again a no-op. This is also indirectly duplicated today in `main` (instance_start: `for switch in boundary_switches { ... instance_ensure_dpd_config ... }` -> `for ip in ips { ... ensure_nat_entry ... }`), but we can fix this here now.
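To make the idempotency point above concrete, here is a minimal, self-contained sketch -- hypothetical types and names, not the actual Nexus datastore code -- of an ensure-style NAT insert that reduces to a lookup when the entry already exists:

```rust
// Sketch only: models "re-ensuring an existing NAT entry is a no-op".
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Hypothetical key for a NAT entry: one row per (external IP, sled).
#[derive(Clone, PartialEq, Eq, Hash)]
struct NatKey {
    external_ip: Ipv4Addr,
    sled_id: u64,
}

#[derive(Default)]
struct NatTable {
    entries: HashMap<NatKey, u64>, // value is a row version the RPW would push
    next_version: u64,
}

impl NatTable {
    /// Idempotent ensure: an already-present entry costs only the lookup;
    /// otherwise a new versioned row is inserted for the RPW to propagate.
    fn ensure_nat_entry(&mut self, key: NatKey) -> bool {
        if self.entries.contains_key(&key) {
            return false; // already ensured; nothing new for dpd
        }
        self.next_version += 1;
        self.entries.insert(key, self.next_version);
        true
    }
}

fn main() {
    let mut nat = NatTable::default();
    let key = NatKey { external_ip: Ipv4Addr::new(203, 0, 113, 5), sled_id: 7 };
    assert!(nat.ensure_nat_entry(key.clone())); // first ensure inserts a row
    assert!(!nat.ensure_nat_entry(key)); // re-ensure is a no-op
}
```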
warn!(
    log,
    "siia_complete_attach: external IP was deleted or call was idempotent"
)
We're perilously close to needing to unwind for this in at least the following case:
- instance is running
- attach saga starts, gets through `siia_get_instance_state`, and then stops
- instance stops and gets deleted; instance delete saga detaches all the external IPs
- attach saga goes through `siia_nat` and `siia_update_opte`, then reaches this step: `instance_ip_move_state` returns false because the IP was already detached, but the changes pushed by the previous two steps don't get undone

I think that, in practice, we very narrowly avoid this because an instance can't be deleted unless it has no active VMM, and sled agent ensures that when a VMM stops, it (sled agent) removes the relevant instance from its instance manager before notifying Nexus that the VMM is gone. This means that in the case above, `siia_update_opte` will fail and cause the saga to unwind.
Similarly, I think this just barely works in the following case:
- instance is running
- attach saga starts, gets all the way through `siia_update_opte`
- instance stops: this calls `instance_delete_dpd_config`, which (fortunately for us) deletes all NAT entries for the IPs of interest even if they're not fully Attached
- now it's safe for the saga not to unwind since instance stop already took care of all the interesting bits
I would consider at least leaving a comment in here about why we think it's safe to ignore the "IP was Detached when we went to move it to Attached" case, but I would also strongly consider making this case unwind if possible--this is all on a knife edge and a seemingly-safe change to, say, the order of operations in sled agent's VMM cleanup path could end up breaking this.
I had intentionally designed for Case 2, but had partly overlooked Case 1 (assuming that both are stop->delete) -- thanks a lot for catching that. I agree on that unwind pathway, although I'd (maybe incorrectly) assumed that the instance would be guaranteed to be removed from InstanceManager before moving from Stopping->Stopped, since any OPTE endpoint on sled-agent will then give us `Error::NoSuchInstance`.

I'd also be happier to unwind here rather than push the limits of what we can get away with. Examining it a bit more closely, however, there is another issue with the Case 1 race: the NAT entry will not be removed on unwind because of instance delete -- it's no longer attached to the instance, so `instance_delete_dpd_config` will not see it. We need a tweak there to handle this and not wipe out a potential followup attach on another instance, whether we unwind or not.
> I agree on that unwind pathway, although I'd (maybe incorrectly) assumed that the instance would be guaranteed to be removed from InstanceManager before moving from Stopping->Stopped, since any OPTE endpoint on sled-agent will then give us Error::NoSuchInstance.
Your assumption is correct--this does happen today when the VMM goes from Stopped -> Destroyed (which transition will also move its Instance from Running -> Stopped). But sled agent didn't previously guarantee this, so my fear is that someone (read: me) will inadvertently change this on the sled agent side in six months' time without considering the connection between these paths...
Okay, I've resolved this by unwinding if we observe a delete at the end of the attach saga. I've put in some extra work to clean up straggler NAT entries more precisely (needed in the delete-unwind case) and to make `instance_ensure_dpd_config` more aggressive in pushing routes where an old conflict may exist. I think doing both of these leaves more useful buffers between us and catastrophe.
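As a rough illustration of that resolution -- hypothetical names and types, not the real `siia_complete_attach` node -- the completion step can surface a failed conditional state move as an error so the saga's undo actions remove the NAT/OPTE state pushed earlier:

```rust
// Sketch only: unwind the attach saga if the IP is no longer Attaching.
#[derive(Debug, PartialEq)]
enum IpAttachState {
    Detached,
    Attaching,
    Attached,
}

#[derive(Debug)]
enum SagaError {
    /// Returned to force the saga to unwind its earlier steps.
    Unwind(&'static str),
}

/// Stand-in for the conditional state move: only Attaching -> Attached,
/// reporting whether the row actually changed.
fn try_move_attaching_to_attached(state: &mut IpAttachState) -> bool {
    if *state == IpAttachState::Attaching {
        *state = IpAttachState::Attached;
        true
    } else {
        false
    }
}

fn complete_attach(state: &mut IpAttachState) -> Result<(), SagaError> {
    if try_move_attaching_to_attached(state) {
        Ok(())
    } else {
        // Previously this case was only logged; erroring here instead lets
        // the saga's undo actions clean up NAT and OPTE state.
        Err(SagaError::Unwind("external IP deleted or detached mid-attach"))
    }
}

fn main() {
    // A concurrent instance delete already forced the IP back to Detached.
    let mut state = IpAttachState::Detached;
    match complete_attach(&mut state) {
        Err(SagaError::Unwind(reason)) => println!("unwinding attach: {reason}"),
        Ok(()) => println!("attach completed"),
    }
}
```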
We still need to figure out the best possible `unwind` semantic at the end of unwind, then hopefully this is buttoned up.
This also includes some extra work designed to make NAT RPW rule management a little more robust in case the IP attach sagas leave behind any mess in the event of a concurrent stop or delete (sorry!).
@ahl @david-crespo On the API design front, I've written up some thoughts after the call on Tuesday into a strawman we can pick apart: https://gist.github.com/FelixMcFelix/18d20262a918ccf691a325a8d948379d. I'm less sure of the ephemeral side of things; if that needs more time in the oven, we can leave it out for now until we arrive at a good conclusion there.
Thanks for adding the new checks here--LGTM for the instance lifecycle pieces (I haven't looked at the external API bits at all).
if collection.runtime_state.migration_id.is_some() {
    return Err(Error::unavail(&format!(
        "tried to attach {kind} IP while instance was migrating: \
        detach will be safe to retry once migrate completes"
s/detach/attach?
Love the new doc comments in here!
@@ -746,26 +702,6 @@ async fn test_external_ip_attach_detach_fail_if_in_use_by_other(
    .parsed_body()
    .unwrap();
assert_eq!(error.message, "floating IP cannot be attached to one instance while still attached to another".to_string());

// Detach in-use FIP from *other* instance should fail.
This case shouldn't be hit by users anymore now that detach is targeted at the FIP itself rather than the pair `(fip, instance)`.
Attach and detach APIs for floating/ephemeral have been retooled according to our last meeting -- split by ephemeral/floating, and floating IP control now lives on the FIP itself to support future network API developments. If any naming tweaks are needed, they shouldn't be too onerous to make. :)
Didn't review implementation closely, but the APIs look good!
This PR adds new endpoints to attach and detach external IPs to/from an individual instance at runtime, when instances are either stopped or started. These new endpoints are:
- `/v1/floating-ips/{floating_ip}/attach`
- `/v1/floating-ips/{floating_ip}/detach`
- `/v1/instances/{instance}/external-ips/ephemeral` (attach)
- `/v1/instances/{instance}/external-ips/ephemeral` (detach)
These follow and enforce the same rules as external IPs registered during instance creation: at most one ephemeral IP, and at most 32 external IPs total.
`/v1/floating-ips/{floating_ip}/attach` includes a `kind` field to account for future API resources which a FIP may be bound to -- such as internet gateways, load balancers, and services.

Interaction with other instance lifecycle changes and sagas
Both external IP modify sagas begin with an atomic update to external IP attach state, conditioned on $\mathit{state} \in \{\mathit{started}, \mathit{stopped}\}$. As a result, we know that an external IP saga can only ever start before any other instance state change occurs. We then only need to think about how these other sagas/events must behave when called during an attach/detach, keeping in mind that these are worst-case orderings: attach/detach are likely to complete quickly.
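A minimal sketch of that first step, under assumed/simplified types; a mutex stands in for the single conditional database update that provides the atomicity in Nexus:

```rust
// Sketch only: an attach may begin only if the instance is Started/Stopped
// and the IP is not already mid-operation.
use std::sync::Mutex;

#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum InstanceState { Started, Stopped, Migrating, Destroyed }

#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum IpAttachState { Detached, Attaching, Attached, Detaching }

struct Record {
    instance_state: InstanceState,
    ip_state: IpAttachState,
}

/// Flip the IP to Attaching only when the instance is Started or Stopped and
/// the IP is currently Detached; otherwise reject so the caller can return a
/// retryable error rather than racing another saga.
fn begin_attach(rec: &Mutex<Record>) -> Result<(), &'static str> {
    let mut r = rec.lock().unwrap();
    let instance_ok =
        matches!(r.instance_state, InstanceState::Started | InstanceState::Stopped);
    if instance_ok && r.ip_state == IpAttachState::Detached {
        r.ip_state = IpAttachState::Attaching;
        Ok(())
    } else {
        Err("instance state changed or IP already in use; retry later")
    }
}

fn main() {
    let rec = Mutex::new(Record {
        instance_state: InstanceState::Started,
        ip_state: IpAttachState::Detached,
    });
    assert!(begin_attach(&rec).is_ok());
    // A second, concurrent attach of the same IP is rejected.
    assert!(begin_attach(&rec).is_err());
}
```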
Instance start & migrate
Both of these sagas alter an instance's functional sled ID, which controls whether NAT entry insertion and OPTE port state updates are performed. If an IP attach/detach is incomplete when either saga reaches `instance_ensure_dpd_config` or `instance_ensure_registered` (e.g., any IP associated with the target instance is in an attaching/detaching state), the start/migrate will unwind with an HTTP 503.

Generally, neither should undo in practice, since IP attach/detach are fast operations -- particularly when an instance was previously stopped. This check is used solely to guarantee that only one saga is accessing a given external IP at a time, and that the update target remains unchanged.
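For illustration, a small sketch of that gate (a hypothetical helper, not the real `instance_ensure_dpd_config`): if any of the instance's external IPs is still mid attach/detach, bail out with a retryable "unavailable" error so the start/migrate saga unwinds:

```rust
// Sketch only: refuse to push network state while any IP is in flux.
#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum IpAttachState { Detached, Attaching, Attached, Detaching }

#[derive(Debug)]
enum Error {
    /// Would map to HTTP 503 Service Unavailable at the API layer.
    Unavailable(String),
}

fn ensure_all_attached(ips: &[IpAttachState]) -> Result<(), Error> {
    if ips.iter().all(|s| *s == IpAttachState::Attached) {
        Ok(())
    } else {
        Err(Error::Unavailable(
            "an external IP attach/detach is in progress; retry shortly".to_string(),
        ))
    }
}

fn main() {
    assert!(ensure_all_attached(&[IpAttachState::Attached]).is_ok());
    match ensure_all_attached(&[IpAttachState::Attached, IpAttachState::Attaching]) {
        Err(Error::Unavailable(msg)) => println!("503: {msg}"),
        Ok(()) => unreachable!(),
    }
}
```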
Instance stop & delete
These operations are either not sagaized (stop), or cannot unwind (delete), and so we cannot block them using IP attach state. IP attach/detach will unwind if a given sled-agent is no longer responsible for an instance. Instance delete will force-detach IP addresses bound to an instance, and if this is seen then IP attach will deliberately unwind to potentially clean up NAT state. OPTE/DPD undo operations are best-effort in such a case to prevent stuck sagas.
Instance stop and IP attach may interleave such that the latter adds additional NAT entries after other network state is cleared. Because we cannot unwind in this case, `instance_ensure_dpd_config` will now attempt to remove leftover conflicting RPW entries if they are detected, since we know they are a deviation from intended state.
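A simplified sketch of that cleanup, using hypothetical types (the real code targets RPW-managed NAT rows): before installing the intended mapping, evict any leftover entry for the same external IP that points at a different target:

```rust
// Sketch only: evict a stale conflicting entry before (re)ensuring our own.
use std::collections::HashMap;
use std::net::Ipv4Addr;

type ExternalIp = Ipv4Addr;
type TargetSled = u64;

fn ensure_dpd_config(
    nat: &mut HashMap<ExternalIp, TargetSled>,
    external_ip: ExternalIp,
    target: TargetSled,
) {
    // A leftover entry pointing elsewhere is a deviation from intended state
    // (e.g. left behind by a stop/attach interleaving): remove it explicitly
    // so the removal is visible downstream, rather than silently overwriting.
    if let Some(existing) = nat.get(&external_ip).copied() {
        if existing != target {
            nat.remove(&external_ip);
        }
    }
    // Idempotent (re)ensure of the mapping we actually want.
    nat.insert(external_ip, target);
}

fn main() {
    let mut nat = HashMap::new();
    let ip = Ipv4Addr::new(203, 0, 113, 10);
    nat.insert(ip, 3); // stale entry from an earlier interleaving
    ensure_dpd_config(&mut nat, ip, 7);
    assert_eq!(nat.get(&ip), Some(&7));
}
```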
Additional/supporting changes

- Parameters now take `NameOrId`; parameter names changed to match.
- `views::ExternalIp` can now contain a `FloatingIp` body.
- `NameOrId` resolution for floating IP allocation pulled up from `Datastore` into `Nexus`.

Closes #4630 and #4628.