Add TAP for keyid flexibility #112
Minor nit: I'd either point directly to that conversation https://python.zulipchat.com/#narrow/stream/223926-pep458-implementation/topic/Timeline/near/188666946 (given that the link is permanent-ish, which I don't know), or leave the link out altogether.
Now that keyids are no longer cryptographically derived from each key, it is possible to list the same key multiple times, each with a different keyid. Can we instead impose another rule that the key must also be unique? Without this rule, an attacker could duplicate a signature to reduce the effective threshold of keys needed to validate the metadata. We do this in go-tuf and rust-tuf to avoid double counting keys due to our (temporary) support of python-tuf's `keyid_hash_algorithms` field.
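For illustration, here is a minimal Python sketch of the kind of de-duplication this rule implies, written against a simplified metadata layout; the function name and the `verify` callback are hypothetical and not taken from go-tuf, rust-tuf, or python-tuf:

```python
def verify_threshold(signed_bytes, signatures, keys, threshold, verify):
    """Count signatures toward the threshold, de-duplicating on the public
    key value rather than on the keyid, so that the same key listed under
    two keyids can only count once. `verify(key, sig, data)` is a
    hypothetical callback that checks a single signature."""
    seen_key_values = set()
    valid = 0
    for signature in signatures:
        key = keys.get(signature["keyid"])
        if key is None:
            continue
        key_value = key["keyval"]["public"]
        if key_value in seen_key_values:
            continue  # same key, different keyid: don't double count
        if verify(key, signature["sig"], signed_bytes):
            seen_key_values.add(key_value)
            valid += 1
    return valid >= threshold
```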
This is a good point. If there are multiple delegations to the same KEYID as part of a threshold, is this allowed? If I delegate to both Alice and Bob, who both delegate to Charlie, is Charlie's approval enough? (I think I know the answer in both cases and this is mostly tangential to this issue, but I think we should discuss whether TUF is doing the right thing.)
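To make the shape of the scenario concrete, here is a rough sketch of the delegation graph in question (field names loosely follow TUF targets delegations; role names and keyids are made up):

```python
# Top-level targets delegates the same paths to both Alice and Bob.
targets_delegations = [
    {"name": "alice", "keyids": ["keyid-alice"], "threshold": 1, "paths": ["project/*"]},
    {"name": "bob", "keyids": ["keyid-bob"], "threshold": 1, "paths": ["project/*"]},
]

# Alice and Bob each delegate the same paths onward to Charlie.
alice_delegations = [
    {"name": "charlie", "keyids": ["keyid-charlie"], "threshold": 1, "paths": ["project/*"]},
]
bob_delegations = [
    {"name": "charlie", "keyids": ["keyid-charlie"], "threshold": 1, "paths": ["project/*"]},
]
# The question: if trust flows through both Alice and Bob, is a single
# signature by Charlie enough to satisfy both branches?
```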
I added a description of this in the scenarios below. As currently written, in this situation Charlie's approval would be enough. I think that this should be sufficient to trust the metadata, as a threshold of signatures using unique keys must be reached to obtain Charlie's approval.
I have less practical experience with TUF than the other folks in this discussion, but I would be surprised if Charlie's approval were enough. My understanding of the specification is that thresholds are a defence against key compromise. In the scenario described, we only have to compromise a single key in order to meet a threshold that is > 1.
This sounds like the "Deadly Diamond of TUF" 😜
Jokes aside, I also think that Charlie's approval is enough. But in reality it required Justin's, Alice's, and Bob's approval to even end up in a situation where it only requires Charlie's approval.
I agree that it doesn't sound ideal. So either: … (that's what the "terminating" flag in delegations is for).
Do others agree? @JustinCappos, you really got me curious about your answers to these cases.
The current algorithm returns with the first match. So suppose that Alice (or Bob) also delegated trust to Daniela, who also approved the target. You should approve it in this case, but the current algorithm would not do so. You'd need to match all parties that agree. Then you'd need to be sure that Alice and Bob each have at least one person that was not yet chosen who they could select to fulfill this need. Also, what if Alice and Bob have threshold delegations to Charlie and Daniela in this case? It's just a lot messier to deal with, in my view.
I think that this issue pertains more to the existing delegation resolution algorithm than to the keyid calculation, so I propose we open a new issue to discuss it further. As @lukpueh mentioned, TAP 3 is not currently included in the specification, so the problem does not exist with the current specification.
On closer look, it appears that the `visited` flag ensures that Charlie's role will not be visited more than once in the DFS described in section 4.4.1 of the client workflow. Does this solve the issue, @JustinCappos @lukpueh?
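For reference, a very rough sketch of a pre-order DFS with such a visited set (the `load_role` and `path_matches` helpers are hypothetical, and terminating delegations and thresholds are ignored for brevity):

```python
from fnmatch import fnmatch


def path_matches(target_path, patterns):
    # Hypothetical helper: does the target path match any delegated pattern?
    return any(fnmatch(target_path, pattern) for pattern in patterns)


def find_target(target_path, top_level_targets, load_role):
    """Sketch of a pre-order DFS over targets delegations. `load_role(name)`
    stands in for fetching and verifying a delegated targets role."""
    visited = set()
    stack = [("targets", top_level_targets)]
    while stack:
        name, role = stack.pop()
        if name in visited:
            continue  # each role is visited at most once
        visited.add(name)
        if target_path in role.get("targets", {}):
            return role["targets"][target_path]
        delegations = role.get("delegations", {}).get("roles", [])
        for delegation in reversed(delegations):  # keep pre-order with a stack
            if path_matches(target_path, delegation["paths"]):
                stack.append((delegation["name"], load_role(delegation["name"])))
    return None
```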
I don't believe this solves it. I don't think it would handle the case above with Daniela, for example.
I do think it would be fine to move this elsewhere, but the most important question is what the system should actually do. We could code it to work either way, but we should definitely strive to do what a user would expect first and foremost.
Ok, I created a new issue to continue this discussion.
Just for clarity, can we generalize this early on?
A role that authorizes the signing keys (by keyid) for another role and lists its public keys == a role delegating trust to another role == a delegating role == root or a delegating targets role (including the top-level targets role).
You do generalize it in the next line ..., where, I think, you are referring to root and delegating targets, right?
Anyway, it's not a big thing. I just want to make sure everyone is on the same page.
So @mnm678 and I discussed this quite a bit, and I seem to remember that we agreed that a client-side key or keyid uniqueness check within a delegation only prevents scenarios where the repository owner already screwed up, i.e., uses the same keyid for different keys, or different keyids for one key.
Or am I missing something?
If not, I wonder if we really want to make client verification more complex because the client can't trust repositories to do their job right. I mean, the client also doesn't check for faux pas like the delegation scenario Justin described above, or for a repository using the same key to sign root and timestamp.
There are a couple of different uniqueness requirements that the keyid/key system needs. I agree that where possible we want to simplify the client workflow by trusting information found in signed metadata files. The situations in which uniqueness is needed include:
Thanks for the extra clarification, @mnm678! This makes sense to me. Regarding the third item, I left some thoughts in the discussion above.
How would we keep track of duplicate public keys? I mean, what if the same public key is listed in different formats? (I know we use only PEM right now, but humor me.)
I would guess that if multiple formats are supported, they would need to be converted to a consistent format for signature verification, so the uniqueness check would take place after any conversion.
Are there any use cases in which multiple formats would be used but could not be converted to a common format? In that case we might need some more discussion.
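As a sketch of what that conversion step could look like (this assumes the `cryptography` package and is not how python-tuf actually does it), one could normalize every public key to a single encoding before comparing:

```python
from cryptography.hazmat.primitives import serialization


def canonical_public_bytes(pem_data: bytes) -> bytes:
    """Normalize a PEM-encoded public key to DER SubjectPublicKeyInfo bytes so
    that different textual representations of the same key compare equal."""
    public_key = serialization.load_pem_public_key(pem_data)
    return public_key.public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

# The uniqueness check would then operate on the normalized bytes, e.g.:
# unique_keys = {canonical_public_bytes(k) for k in listed_pem_keys}
```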
But once again (and thanks for pointing this out, @SantiagoTorres), if the attacker does not control the metadata that lists the public keys, then they cannot make different representations of the same key each count towards the threshold.
And if the attacker does control the metadata, they can add any key they want and uniqueness checks on the client are moot.
This means that we really only need to de-duplicate on the one key representation in the delegating metadata. Unless, and this brings me back to my initial comment above, we don't trust the delegating role.
Sorry for adding to the confusion with my comment above, I was briefly confused too. Maybe we should re-hash the threat model for this document.
Yes, I was more interested in someone (e.g., the delegating role) trying to deliberately fool `python-tuf` into accepting what are actually duplicate keys by using different public key formats that describe the same public key parameters.
Right, but this delegating role could also just change the threshold? My impression is that this is necessary because we want to avoid footguns, but it's not a very exploitable attack vector.
Sure, but it's still something to think about, I think, especially if the client-side checks are not expensive.
The unique key check already exists to ensure that an attacker cannot append duplicate signatures (although this requirement is not explicitly stated in the specification). The main difference here is requiring that the client do this uniqueness check on the key instead of the keyid. If everything is set up correctly, using the key makes no difference, but if the metadata accidentally contains duplicate keys with different keyids, using keyids to verify uniqueness could let a key count toward a threshold more than once.
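A tiny illustration of that footgun, with placeholder values:

```python
# The same public key accidentally listed under two different keyids.
keys = {
    "keyid-1": {"keytype": "ed25519", "keyval": {"public": "deadbeef..."}},
    "keyid-2": {"keytype": "ed25519", "keyval": {"public": "deadbeef..."}},  # same key
}
signatures = [
    {"keyid": "keyid-1", "sig": "..."},
    {"keyid": "keyid-2", "sig": "..."},  # same signer under a second keyid
]

# De-duplicating on keyids sees two distinct signers; de-duplicating on the
# key value sees only one, which is what a threshold of 2 should require.
print(len({s["keyid"] for s in signatures}))                             # 2
print(len({keys[s["keyid"]]["keyval"]["public"] for s in signatures}))   # 1
```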
I don't really understand why this is needed anymore. Just generate a new file with whatever new KEYID scheme you want. Consumers never understood how you derived the KEYIDs anyway, right?
The process is pretty straightforward, but the metadata owner should ensure that the keyids remain unique throughout any transition. This might not need a whole section, but I think that it's important to note.
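As a sketch of the kind of sanity check that note implies (the helper name and metadata layout are hypothetical): a reused keyid must keep referring to the same key throughout the transition.

```python
def check_keyid_transition(current_keys, proposed_keys):
    """current_keys and proposed_keys map keyid -> key object, as in TUF
    metadata. Raise if a keyid would end up referring to two different keys."""
    for keyid, key in proposed_keys.items():
        existing = current_keys.get(keyid)
        if existing is not None and existing["keyval"] != key["keyval"]:
            raise ValueError(f"keyid {keyid!r} would refer to two different keys")

current = {"0x01": {"keytype": "ed25519", "keyval": {"public": "aa..."}}}
proposed = {"alice-2021": {"keytype": "ed25519", "keyval": {"public": "aa..."}}}
check_keyid_transition(current, proposed)  # no collision, passes silently
```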
You touch upon this in the specification, but it might be helpful to also include an example here for the case where targets metadata file A delegates to C using key D and keyid K, while another targets metadata file B delegates to C using key E and the same keyid K. From what I understand, this should result in C being signed with two signatures that share keyid K but verify under different keys.
Both signatures will be checked if the delegation search ever ends up at A or B.
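Roughly, C's signature list would then look like this (a hypothetical illustration with placeholder values):

```python
# A lists key D under keyid "K"; B lists key E under the same keyid "K".
c_signatures = [
    {"keyid": "K", "sig": "<signature made with key D>"},
    {"keyid": "K", "sig": "<signature made with key E>"},
]
# A client that reached C via A tries both entries (they share keyid "K")
# and succeeds on the one made with key D; a client that reached C via B
# succeeds on the one made with key E.
```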
Added, thanks for the suggestion!
This needs to be finished, of course 🙂