Formalize update metadata #103
The most current example of what this metadata could include mentions objects containing
Some of the more obvious things to think about:
A loose example:
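(The original example is not preserved in this thread. The following is a plausible sketch based on the fields the discussion mentions — "flavor", architecture, version, named "images", and "waves" — every field name and value here is an assumption, not the real format.)

```python
import json

# Hypothetical reconstruction of the "loose example" of update metadata.
# Field names and values are assumptions drawn from the surrounding
# discussion, not the actual Thar manifest format.
manifest_json = """
{
  "updates": [
    {
      "flavor": "aws-k8s",
      "arch": "x86_64",
      "version": "0.1.2",
      "max_version": "0.1.2",
      "images": {
        "boot": "thar-x86_64-boot-0.1.2.img",
        "root": "thar-x86_64-root-0.1.2.img",
        "hash": "thar-x86_64-hash-0.1.2.img"
      },
      "waves": [
        {"start": "2019-10-01T00:00:00Z", "bound": 512},
        {"start": "2019-10-02T00:00:00Z", "bound": 2048}
      ]
    }
  ]
}
"""

manifest = json.loads(manifest_json)
update = manifest["updates"][0]
```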
Splitting architecture and version out of the "flavor" lets the client filter through a list of these faster. Similarly, with "images" we're likely to have a few distinct images required for an upgrade, so it makes sense to name them.

I like the wave example, although it would require making sure that the metadata-writer agrees with the clients about the random seed range, lest all clients end up in the first wave, for example. Maybe something like a "bound_max" field telling clients what the full range is and letting them rescale themselves in the event of a mismatch avoids that, or maybe it's just easier to set it as a Thar-wide constant and never change it :)

Specifying the latest version is a bit interesting. Is it enough to mark one of the updates in the list as "latest", or does an exact "max_version" need to be specified (or does "latest" + "version" solve that)? @iliana probably has TUF opinions about this.
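A minimal sketch of the seed/wave idea above, assuming each client holds a random seed and the metadata lists wave bounds. The "bound_max" rescaling and all names here are hypothetical, illustrating the mismatch concern rather than any real updog code:

```python
# Hypothetical Thar-wide constant for the full seed range; clients and
# the metadata-writer must agree on this (or rescale, below).
BOUND_MAX = 2048

def wave_for_seed(seed, waves):
    """Return the index of the wave a client falls into.

    `waves` is a list of (bound, start_time) pairs sorted by bound;
    a client whose seed is below a wave's bound joins that wave.
    """
    for i, (bound, _start) in enumerate(waves):
        if seed < bound:
            return i
    return len(waves) - 1  # past the last bound: final wave

def scaled_seed(seed, client_max, server_max):
    # If the client's seed range disagrees with the server's advertised
    # "bound_max", rescale rather than dumping everyone into wave 0.
    return seed * server_max // client_max

waves = [(512, "2019-10-01T00:00:00Z"), (2048, "2019-10-02T00:00:00Z")]
```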
Direct access is preferable to having to filter through an iterator.
should probably be
The waves can similarly be more direct / less verbose. Maybe something like:
since they're just points on a graph. (Not a fan of having to stringify the bounds, since JSON only permits strings for object keys, though.) "bound_max" doesn't feel like something we need to provide; it can be inferred by the client.
Yeah, if we're not worried about seeing arbitrary partitions then we could even just have
Ack, but I'm assuming we still need distinct start and end times per wave.
Wouldn't the start/end times just be the boundaries? Unless you actively want gaps between waves, which I'm not certain is necessary.
I agree with you here, though it makes parsing a little nasty. Would a better compromise be something like this?
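(The compromise example itself is not preserved in this thread. One plausible shape, sketched here as an assumption, is an ordered array of [bound, start_time] pairs: it keeps the "points on a graph" directness while avoiding stringified numeric object keys.)

```python
import json

# Hypothetical compromise encoding: an ordered array of
# [bound, start_time] pairs instead of an object with string keys.
waves_json = """
[
  [512,  "2019-10-01T00:00:00Z"],
  [2048, "2019-10-02T00:00:00Z"]
]
"""

# Each inner array parses naturally into a (bound, start) pair.
waves = [(bound, start) for bound, start in json.loads(waves_json)]
```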
I would +1 once I figure out whether Serde can decode something like that directly into a …
In addition to the upgrade wave architecture and facilities, the update metadata as proposed overlaps with what we'll likely want in order to integrate other "update orchestrators" (for cooperating with the likes of Kubernetes and ECS). Let me re-read and digest the proposed metadata a bit more; I've already identified data I had in mind for such an implementation, and I think this metadata can accommodate much of what's drafted. As for the update orchestrator itself, I'll open an issue to drive its integration concerns.
A quick thought: do we need to specify the datastore version associated with a "thar" version as well, given that they are not necessarily in lock-step?
The datastore version for a given thar version is useful information, but I think the slated migration & datastore-upgrade design removes the need for the update process to consider it at all, barring a nifty policy to control such upgrades. That mapping does offer insight, though, as with traditional software upgrades that may or may not require a database migration: an upgrade with a migration is likely more intrusive and risky than one without, so extra care may be taken. In our case, I would hazard a guess that an upgrade without a datastore migration is only marginally less dangerous than one with a migration, given that the current designs account for some of the known-unknowns and will, tentatively, be tested for each logically possible migration path.
Yep, these should be in-image already, but the design does mention the possibility of specifying migration "fix-ups" that are downloaded separately. These could also be covered by just releasing a new image version, though.
Yeah, we need migrations listed in the update metadata to be able to roll back. The "old" image won't have those migrations.
Not if you've broken the update or migration process! We need rollbacks. I haven't been involved in this discussion yet, but the basic idea is covered in the (internal, hopefully in-repo soon) migration doc. I've been assuming that whatever metadata we decide here would either include that or would be extensible enough to easily add it... |
Following on from #91 we may have a metadata format that looks something like:
Where we have
So for a given update the client would look up the target datastore version, and then look up the series of migrations needed to get from the current version to the target version. As mentioned in #91, these mappings could also be nested arrays rather than pretend-tuples to make the JSON parsing easier; either way I think we're in for some …
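The two-step lookup described above (thar version → datastore version, then current/target datastore versions → migration series) can be sketched as follows. All table contents, migration names, and the ordering list are made up for illustration; the real mappings live in the #91-style metadata:

```python
# Hypothetical thar-version -> datastore-version table.
datastore_versions = {
    "0.1.1": "1.0",
    "0.1.2": "1.1",
    "0.1.3": "1.2",
}

# Hypothetical (from, to) -> migrations table; names are invented.
migrations = {
    ("1.0", "1.1"): ["migrate_1.0_fix-settings"],
    ("1.1", "1.2"): ["migrate_1.1_add-key", "migrate_1.1_rename-key"],
}

def migrations_for(current_ds, target_ds, order):
    """Collect the migration series from the current datastore version
    to the target, walking the known ordering one step at a time."""
    steps = []
    i = order.index(current_ds)
    j = order.index(target_ds)
    for a, b in zip(order[i:j], order[i + 1:j + 1]):
        steps.extend(migrations[(a, b)])
    return steps

order = ["1.0", "1.1", "1.2"]
target = datastore_versions["0.1.3"]
```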
With the base metadata now described in https://github.com/amazonlinux/PRIVATE-thar/blob/develop/workspaces/updater/updog/src/main.rs#L109 and surrounds, I'll close this issue. There may be some extensions that build upon this, but let's cover those in their own issues.
Currently updog loads some very basic metadata (currently called manifest.json) and some hardcoded target names. In order to support multiple architectures, flavors, and update waves, we need to formalize what the metadata that sits between the crunchy TUF shell and the gooey partition data + migrations looks like.