[nexus] Managing local rack -> managing local fleet #1276
Labels: customer (for any bug reports or feature requests tied to customer requests)

Comments
smklein added a commit that referenced this issue on Dec 19, 2022:
## Before this PR

- IP Pools could exist in at most one project. IP allocation during instance creation occurred [either by requesting an IP pool belonging to a project, or by "just looking for any unreserved IP Pool"](https://github.com/oxidecomputer/omicron/blob/79765a4e3b39a29bc9940c0e4a49c4364fbcc9e3/nexus/src/db/queries/external_ip.rs#L186-L212). As discussed in #2055, our intention is for IP pools to be used across multiple projects, and for projects to be able to use multiple IP pools.
- "Service" IP pools were indexed by rack ID, though (as documented in #1276) they should probably be accessed by AZ instead.

## This PR

- Adds a default IP pool named `default`, which is used for address allocation unless a more specific IP pool is provided
- Removes "project ID" from IP pools (and external IP addresses)
- Removes "rack ID" from the IP pool API and DB representation

## In the future

- This PR doesn't provide the many-to-many connection between projects and IP pools that we eventually want, where projects can be configured to use different IP pools for different purposes. However, by removing the not-quite-accurate constraint that an IP pool must belong to a *single* project, the API moves closer to that goal.
- We should probably access the `service_ip_pool` API with the AZ UUID used for the query, but since AZs don't exist in the API yet, this has been omitted.

Part of #2055
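To make the allocation rule above concrete, here is a minimal Rust sketch of the fallback behavior (use the explicitly requested pool if one is named, otherwise fall back to the pool named `default`). The `IpPool` struct and `select_pool` function are illustrative stand-ins and do not mirror the actual Nexus/omicron types or database queries.

```rust
// Hypothetical sketch of the pool-selection rule described above: allocation
// uses the caller-supplied pool if one is named, otherwise falls back to the
// fleet-wide pool named "default". Names and types are illustrative only.

#[derive(Debug, Clone, PartialEq)]
struct IpPool {
    name: String,
    // Note: no `project_id` or `rack_id` field -- the PR removes both.
}

/// Pick the pool to allocate from: an explicitly requested pool wins,
/// otherwise the pool named "default" is used.
fn select_pool<'a>(pools: &'a [IpPool], requested: Option<&str>) -> Option<&'a IpPool> {
    let wanted = requested.unwrap_or("default");
    pools.iter().find(|p| p.name == wanted)
}

fn main() {
    let pools = vec![
        IpPool { name: "default".to_string() },
        IpPool { name: "external-services".to_string() },
    ];

    // No pool specified: the `default` pool is chosen.
    assert_eq!(
        select_pool(&pools, None).map(|p| p.name.as_str()),
        Some("default")
    );

    // A specific pool is honored when named.
    assert_eq!(
        select_pool(&pools, Some("external-services")).map(|p| p.name.as_str()),
        Some("external-services")
    );

    println!("pool selection sketch OK");
}
```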
leftwo pushed a commit that referenced this issue on Apr 30, 2024:
Propolis:
- Update oximeter dependency to pull in automatic producer registration (#689)
- Propagate ReplaceResult up; return disk status (#687)
- Enable clippy warnings for lossless casts
- Update rustls deps for CVE-2024-32650
- migration: refrain from offering all pages when possible (#682)

Crucible:
- DTrace probes for IO on/off the network (#1284)
- Update oximeter dep to pull in automatic producer registration (#1279)
- Remove `ReadResponse` in favor of `RawReadResponse` (#1212)
- Fix typo in DTrace upstairs_info (#1276)
- replace needing no work should not be an error (#1275)
- Add some DTrace scripts to the package. (#1274)
- More Pantry updates for Region replacement (#1269)
- Send the correct task count for reconciliations (#1271)
- Raw extent cleanup (#1268)
leftwo added a commit that referenced this issue on Apr 30, 2024, with the same change list as above (co-authored by Alan Hanson <alan@oxide.computer>).
twinfees added the customer label on Nov 1, 2024.
Some operations within Nexus are implemented as "manage the state which may exist within my local rack". This includes:
However, longer-term, we would ideally migrate many of these operations to be "fleet-wide" instead of "rack-wide". That way, one Nexus could control multiple racks simultaneously, ensure that CRDB nodes are distributed within an AZ, and ensure that service redundancy suffices for multi-rack failure scenarios.
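As a rough illustration of that scoping change (not actual omicron code), the sketch below contrasts a rack-scoped placement query with a fleet-wide one; `RackId`, `Sled`, and the candidate functions are hypothetical stand-ins.

```rust
// Illustrative sketch only: contrasts "rack-scoped" vs "fleet-wide" placement
// decisions as described in this issue. The types and IDs are made up and do
// not reflect actual omicron data structures.

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct RackId(u32);

#[derive(Debug, Clone, Copy)]
struct Sled {
    rack: RackId,
    id: u32,
}

/// Rack-scoped view: only sleds in the local rack are placement candidates.
fn candidates_rack_scoped(sleds: &[Sled], local_rack: RackId) -> Vec<Sled> {
    sleds.iter().copied().filter(|s| s.rack == local_rack).collect()
}

/// Fleet-wide view: every sled in the AZ is a candidate, which allows
/// spreading redundant services (e.g. CRDB nodes) across racks.
fn candidates_fleet_wide(sleds: &[Sled]) -> Vec<Sled> {
    sleds.to_vec()
}

fn main() {
    let sleds = vec![
        Sled { rack: RackId(1), id: 10 },
        Sled { rack: RackId(1), id: 11 },
        Sled { rack: RackId(2), id: 20 },
    ];

    let rack_local = candidates_rack_scoped(&sleds, RackId(1));
    let fleet_wide = candidates_fleet_wide(&sleds);

    // A fleet-wide scope sees sleds on other racks that a rack-scoped query misses.
    assert_eq!(rack_local.len(), 2);
    assert_eq!(fleet_wide.len(), 3);
    println!("rack-scoped candidates: {:?}", rack_local);
    println!("fleet-wide candidates: {:?}", fleet_wide);
}
```

With a fleet-wide view, redundant services can be spread across racks rather than being confined to whichever rack happens to host the Nexus making the placement decision.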
For additional context, see: https://github.com/oxidecomputer/omicron/pull/1234/files/28d87f51ab88cce3d8ff2560a8996904e8c78f81#diff-5a93a4691987ea1b28d848375a2728abcb26cec85d477d051243cb1863198392