[18/n] add target release generation to planning input #8401
Conversation
Created using spr 1.3.6-beta.1 [skip ci]
Created using spr 1.3.6-beta.1
Actually, I don't think this is quite right in general, though it seems to be the logic for
Created using spr 1.3.6-beta.1 [skip ci]
Created using spr 1.3.6-beta.1
Updated this PR to also include the generation number for the old repo. See #8056 (comment) for some more discussion.
plotnick left a comment
Looks good, thanks!
    // This generation is successively incremented for each TUF repo. We use
    // generation 2 to represent the first generation with a TUF repo
    // attached.
    let target_release_generation = Generation::new().next();
Maybe just me, but I find Generation::from_u32(2) more explicit and therefore preferable for this kind of initialization. It keeps me from wondering (even though I know) whether the initial generation is 0 or 1 each time.
Sure, I'm fine with either. Changed it to from_u32.
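To make the equivalence concrete, here is a minimal sketch with a stand-in `Generation` newtype (not the real omicron type); it only assumes the `new`, `next`, and `from_u32` constructors mentioned in this thread.

```rust
// Stand-in `Generation` newtype, just to illustrate that the two spellings
// discussed above name the same value. This is NOT the actual omicron type.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Generation(u32);

impl Generation {
    // In this sketch the initial generation is 1.
    fn new() -> Self {
        Generation(1)
    }

    // Returns the generation after this one.
    fn next(self) -> Self {
        Generation(self.0 + 1)
    }

    // Builds a generation directly from a number.
    fn from_u32(n: u32) -> Self {
        Generation(n)
    }
}

fn main() {
    // Both spellings denote generation 2 (the first generation with a TUF
    // repo attached); `from_u32(2)` simply makes the number explicit.
    assert_eq!(Generation::new().next(), Generation::from_u32(2));
}
```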
    };

    self.tuf_repo = new_repo;
    // XXX: should this set old_repo to the current tuf_repo?
I don't see any reason why not, and I can't think of any other places where we'd want to do so.
Honestly, I'd say it's because old_repo is really the current release. This system doesn't enforce the invariant that setting a new target release is disallowed while an update is in progress (#8056 (comment)), so I think an unconditional self.old_repo = self.tuf_repo isn't quite what we want.
We may want to have a more explicit operation for this in the future, but it depends on how tests for this feature play out -- so I'd like to defer this until we get some more experience with that.
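As a rough illustration of the distinction being drawn here (all type and method names below are placeholders, not the actual planner API), keeping the old_repo decision explicit might look something like this:

```rust
// Hypothetical sketch: since `old_repo` describes the release the system may
// still be running, advancing it is kept as a separate, explicit decision
// rather than an unconditional `self.old_repo = self.tuf_repo`.
#[derive(Clone)]
struct TufRepoPolicy; // stand-in for the real policy type

struct TargetReleases {
    tuf_repo: TufRepoPolicy, // current target release
    old_repo: TufRepoPolicy, // previous target release (may still be running)
}

impl TargetReleases {
    fn set_target_release(
        &mut self,
        new_repo: TufRepoPolicy,
        previous_update_complete: bool,
    ) {
        // Only advance `old_repo` when the prior update has fully landed;
        // otherwise keep pointing at the release that is still in use.
        if previous_update_complete {
            self.old_repo = self.tuf_repo.clone();
        }
        self.tuf_repo = new_repo;
    }
}
```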
    .internal_context("fetching current target release")?
    .internal_context("fetching previous target release")?;
Nice catch, thanks!
Add a new TufRepoPolicy struct that currently consists of the target release generation plus the TufRepoDescription, and use it for both tuf_repo and old_repo.

We're going to need tuf_repo.generation for MUPdate override work (RFD 556). We don't anticipate needing old_repo.generation for now, but logically it makes sense to have.

Depends on:
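For orientation, here is a rough sketch of the shape described above, with placeholder types standing in for the real Generation and TufRepoDescription; field and struct names other than tuf_repo, old_repo, and generation are illustrative guesses.

```rust
// Placeholder types: the real planner types are more elaborate.
type Generation = u64; // stand-in for the planner's generation type
struct TufRepoDescription; // stand-in for the actual repo description

struct TufRepoPolicy {
    // Incremented each time a new TUF repo becomes the target release;
    // per the diff above, generation 2 is the first with a repo attached.
    generation: Generation,
    description: TufRepoDescription,
}

struct PlanningInputTufRepos {
    // The current target release; `tuf_repo.generation` is what the MUPdate
    // override work (RFD 556) will need.
    tuf_repo: TufRepoPolicy,
    // The previous target release; its generation isn't needed yet, but it
    // is kept for symmetry.
    old_repo: TufRepoPolicy,
}
```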