Merkle tree proof implementation #285
base: main
Conversation
One item for the TODO list would be something for examples/. Nice work btw!
I added examples for consistency and inclusion proofs. I will also add some tests in the future for the methods I added to the Rekor models. What test data should I use for this, since these require log entries and keys? I'm also unsure whether my checks that a proof and a checkpoint are valid together are correct. What is the correct way to do this?
Sorry, this has been sitting on my review list for a long time. I need some time to get familiar with the feature being implemented 😓
No worries! I'm not in a rush to complete this.
I received an error with the current implementation when I tested for the entry with index `25579` (see the reproduction below).

**Problem**

I received an error although the verification should have succeeded.

**Underlying problem**

The underlying error is, I believe, related to the assumption that the body is formatted as canonical JSON when it is appended to the log. Indeed, to verify that the body is included in the Merkle tree, the implementation currently reserializes the body using a canonical formatter. However, I think the original entry wasn't formatted using a canonical formatter (see the outputs below).

**To fix**

We can't make the assumption that the body was formatted canonically, so I think the solution should be to keep the "original" base64-decoded body as bytes in the log entry struct. If you agree with the fix, I'm happy to work on it.

**To reproduce**

The following code:

```rust
let rekor_config = Configuration::default();
let rt = tokio::runtime::Runtime::new().unwrap();
let log_entry = rt.block_on(get_log_entry_by_index(&rekor_config, 25579)).unwrap();
let mut encoded = vec![];
let mut ser = serde_json::Serializer::with_formatter(
    &mut encoded,
    olpc_cjson::CanonicalFormatter::new(),
);
log_entry.body.serialize(&mut ser).unwrap();
println!("{}", String::from_utf8(encoded).unwrap());
```

returns:

```json
{"apiVersion":"0.0.1","kind":"rekord","spec":{"data":{"hash":{"algorithm":"sha256","value":"ce9a7c82f32194995888758cf107ef0cc52e0b8cdce73b4240658ee9e73783cb"}},"signature":{"content":"MGUCMD3oKzgsGnPAkJEXegDIsdlh4BFCQbM6jng4Sy3axY/+2tlK97oe/CkxabT1ZXUqCAIxAJDq+zLfRZZEJD5DvaKhFEu+Jm+jD4UXc3CaZp2MSajiralmtalA6fSGCXjwGfUzOw==","format":"x509","publicKey":{"content":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNrakNDQWhpZ0F3SUJBZ0lVQU0rK0dYRFN5bUNPSW82YmxMMG5EZngxb21nd0NnWUlLb1pJemowRUF3TXcKS2pFVk1CTUdBMVVFQ2hNTWMybG5jM1J2Y21VdVpHVjJNUkV3RHdZRFZRUURFd2h6YVdkemRHOXlaVEFlRncweQpNVEEzTWpnd09ETTNOREphRncweU1UQTNNamd3T0RVM05ERmFNQUF3ZGpBUUJnY3Foa2pPUFFJQkJnVXJnUVFBCklnTmlBQVJjMDMrUU4vTHBrOGpqUFQwTmV5a01ucm9mMnpZUkJxNm05ei9TMXhRSzduZnZhU3grRjUrTEtwN3gKR2ExbHY2SWNvRXdwUHA2MUdsYnd5S0VQVWJLdzJrYnJyRVpPMnhKV3kxb0VEUHBYMlJqcTBYS0RZcEF5Zi9mQwoyZzJjSnVtamdnRW5NSUlCSXpBT0JnTlZIUThCQWY4RUJBTUNCNEF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVICkF3TXdEQVlEVlIwVEFRSC9CQUl3QURBZEJnTlZIUTRFRmdRVTRBUDhtTkI4ejhSZFJyTVVLZ1A2Mm0xUFErd3cKSHdZRFZSMGpCQmd3Rm9BVXlNVWRBRUdhSkNreVVTVHJEYTVLN1VvRzArd3dnWTBHQ0NzR0FRVUZCd0VCQklHQQpNSDR3ZkFZSUt3WUJCUVVITUFLR2NHaDBkSEE2THk5d2NtbDJZWFJsWTJFdFkyOXVkR1Z1ZEMwMk1ETm1aVGRsCk55MHdNREF3TFRJeU1qY3RZbVkzTlMxbU5HWTFaVGd3WkRJNU5UUXVjM1J2Y21GblpTNW5iMjluYkdWaGNHbHoKTG1OdmJTOWpZVE0yWVRGbE9UWXlOREppT1daallqRTBOaTlqWVM1amNuUXdIZ1lEVlIwUkFRSC9CQlF3RW9FUQpZM1JoWkdWMVFHZHRZV2xzTG1OdmJUQUtCZ2dxaGtqT1BRUURBd05vQURCbEFqRUE3TTJwSzhRUFRrSGs1bzZ0CmdnampZdjBLV1BUajRKUTAwU3RjR0xqa1g3SU1iNC9HdXpYRkQ4czZDOEd3NmpwMEFqQW1Xa2JROTVsMzlnUGQKR2pjUjBSQURaT0dYb0NPQURwOE5lSzhBL2dKdWdnR0ZINHZYZ2l1ODJsQm5MOEZSc09jPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="}}}}
```

whereas a direct call to the API:

```shell
curl 'https://rekor.sigstore.dev/api/v1/log/entries?logIndex=25579' | jq -r '.[ (keys_unsorted)[0] ].body' | base64 -d
```

returns:

```json
{"apiVersion":"0.0.1","spec":{"data":{"hash":{"algorithm":"sha256","value":"ce9a7c82f32194995888758cf107ef0cc52e0b8cdce73b4240658ee9e73783cb"}},"signature":{"content":"MGUCMD3oKzgsGnPAkJEXegDIsdlh4BFCQbM6jng4Sy3axY/+2tlK97oe/CkxabT1ZXUqCAIxAJDq+zLfRZZEJD5DvaKhFEu+Jm+jD4UXc3CaZp2MSajiralmtalA6fSGCXjwGfUzOw==","format":"x509","publicKey":{"content":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNrakNDQWhpZ0F3SUJBZ0lVQU0rK0dYRFN5bUNPSW82YmxMMG5EZngxb21nd0NnWUlLb1pJemowRUF3TXcKS2pFVk1CTUdBMVVFQ2hNTWMybG5jM1J2Y21VdVpHVjJNUkV3RHdZRFZRUURFd2h6YVdkemRHOXlaVEFlRncweQpNVEEzTWpnd09ETTNOREphRncweU1UQTNNamd3T0RVM05ERmFNQUF3ZGpBUUJnY3Foa2pPUFFJQkJnVXJnUVFBCklnTmlBQVJjMDMrUU4vTHBrOGpqUFQwTmV5a01ucm9mMnpZUkJxNm05ei9TMXhRSzduZnZhU3grRjUrTEtwN3gKR2ExbHY2SWNvRXdwUHA2MUdsYnd5S0VQVWJLdzJrYnJyRVpPMnhKV3kxb0VEUHBYMlJqcTBYS0RZcEF5Zi9mQwoyZzJjSnVtamdnRW5NSUlCSXpBT0JnTlZIUThCQWY4RUJBTUNCNEF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVICkF3TXdEQVlEVlIwVEFRSC9CQUl3QURBZEJnTlZIUTRFRmdRVTRBUDhtTkI4ejhSZFJyTVVLZ1A2Mm0xUFErd3cKSHdZRFZSMGpCQmd3Rm9BVXlNVWRBRUdhSkNreVVTVHJEYTVLN1VvRzArd3dnWTBHQ0NzR0FRVUZCd0VCQklHQQpNSDR3ZkFZSUt3WUJCUVVITUFLR2NHaDBkSEE2THk5d2NtbDJZWFJsWTJFdFkyOXVkR1Z1ZEMwMk1ETm1aVGRsCk55MHdNREF3TFRJeU1qY3RZbVkzTlMxbU5HWTFaVGd3WkRJNU5UUXVjM1J2Y21GblpTNW5iMjluYkdWaGNHbHoKTG1OdmJTOWpZVE0yWVRGbE9UWXlOREppT1daallqRTBOaTlqWVM1amNuUXdIZ1lEVlIwUkFRSC9CQlF3RW9FUQpZM1JoWkdWMVFHZHRZV2xzTG1OdmJUQUtCZ2dxaGtqT1BRUURBd05vQURCbEFqRUE3TTJwSzhRUFRrSGs1bzZ0CmdnampZdjBLV1BUajRKUTAwU3RjR0xqa1g3SU1iNC9HdXpYRkQ4czZDOEd3NmpwMEFqQW1Xa2JROTVsMzlnUGQKR2pjUjBSQURaT0dYb0NPQURwOE5lSzhBL2dKdWdnR0ZINHZYZ2l1ODJsQm5MOEZSc09jPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="}}},"kind":"rekord"}
```

Note that the `kind` field appears at a different position in the two outputs.
Thanks for the thorough triage @gaetanww! I won't speak too much on this as I haven't tried this out myself, but I suspect that the issue you're seeing regarding canonicalization may be related to the canonicalizer being used. This implementation currently uses `olpc_cjson`, while Rekor uses RFC 8785 canonicalization. (OLPC canonical JSON is regrettably distinct from RFC 8785 canonical JSON.)
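For reference, here is a minimal sketch of serializing the same value with both canonicalizers so the outputs can be compared byte-for-byte. It assumes the `serde_jcs` crate for RFC 8785 (JCS) output; neither the crate choice nor the helper names come from this PR.

```rust
use serde::Serialize;

// Serialize with the OLPC canonical-JSON formatter currently used by this implementation.
fn to_olpc_cjson<T: Serialize>(value: &T) -> serde_json::Result<Vec<u8>> {
    let mut buf = Vec::new();
    let mut ser =
        serde_json::Serializer::with_formatter(&mut buf, olpc_cjson::CanonicalFormatter::new());
    value.serialize(&mut ser)?;
    Ok(buf)
}

fn main() {
    let value = serde_json::json!({ "kind": "rekord", "apiVersion": "0.0.1" });

    let olpc = to_olpc_cjson(&value).unwrap();
    // Assumption: `serde_jcs` offers a serde_json-style `to_vec` that emits RFC 8785 output,
    // which is the canonicalization Rekor uses server-side.
    let jcs = serde_jcs::to_vec(&value).unwrap();

    // The two canonical forms agree for simple values like this one, but they are
    // different specifications and can diverge (for example on number formatting).
    println!("olpc_cjson: {}", String::from_utf8_lossy(&olpc));
    println!("rfc 8785:   {}", String::from_utf8_lossy(&jcs));
}
```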
np! Thanks for the pointer, yes that could be it. I think the following code should work (using the `json_syntax` crate):

```rust
pub fn verify_inclusion(&self, rekor_key: &CosignVerificationKey) -> Result<(), SigstoreError> {
    self.verification
        .inclusion_proof
        .as_ref()
        .ok_or(UnexpectedError("missing inclusion proof".to_string()))
        .and_then(|proof| {
            // encode as canonical JSON
            let mut json_value =
                json_syntax::Value::from_serde_json(serde_json::to_value(&self.body)?);
            json_value.canonicalize();
            let encoded_entry = json_value.compact_print().to_string().into_bytes();
            proof.verify(&encoded_entry, rekor_key)
        })
}
```
I should hopefully be able to take a closer look at this some time next week and apply the fix / update the PR.
One more data point on the issue above. It works perfectly without modification with index:
I haven't looked at it in a few months, but if I remember correctly there are two types of indices: one is the index in the log and one is the index in the tree. So you might be using the incorrect index.
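To illustrate the distinction (assuming Rekor's sharded-log indexing works this way): the global log index spans all shards, while an inclusion proof is verified against a single Merkle tree, so the leaf index used for verification is the global index minus the entries held by earlier shards. A rough sketch of that arithmetic follows; the helper and its parameters are hypothetical, not this PR's API.

```rust
/// Hypothetical helper: convert a global (virtual) Rekor log index into the
/// leaf index within the shard's Merkle tree, given the lengths of the shards
/// that precede it. Returns None if the index falls before this shard.
fn leaf_index_in_shard(global_index: u64, previous_shard_lengths: &[u64]) -> Option<u64> {
    let offset: u64 = previous_shard_lengths.iter().sum();
    global_index.checked_sub(offset)
}

fn main() {
    // Placeholder numbers, purely illustrative.
    let previous_shards = [1_000_000u64];
    assert_eq!(leaf_index_in_shard(1_000_005, &previous_shards), Some(5));
    assert_eq!(leaf_index_in_shard(42, &previous_shards), None);
}
```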
@gaetanww is your issue resolved?
Sorry for the slow response. I made it work for the latest rekor records (e.g., index
I did some investigating; I think your original analysis was correct, @gaetanww, and the formatting of the log entry is the issue, as both serializers produce identical outputs for me. The Go client also uses the Base64-encoded entry provided by the log; is this the correct way to handle this? I feel like this might conflict with the description of how SETs are supposed to be verified.
Yes, the documentation does conflict with the actual definition. There should at least be a comment in the Go code to explain why it's implemented that way.
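For reference, a minimal sketch of the fix discussed above: verify against the exact base64-decoded bytes returned by the log instead of re-serializing the parsed body. It assumes the `base64` 0.21+ `Engine` API, and the `proof.verify(bytes, key)` shape is borrowed from the earlier snippet; this is an illustration, not the final implementation.

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};

/// Sketch: decode the body exactly as the log returned it, so the verified bytes
/// match what was appended to the log, regardless of how the JSON was formatted.
fn decode_raw_body(body_b64: &str) -> Result<Vec<u8>, base64::DecodeError> {
    STANDARD.decode(body_b64)
}

// Hypothetical usage inside an inclusion check, where `raw_body_b64` is the base64
// `body` string from the Rekor response and `proof` / `rekor_key` are as in the
// snippet earlier in this thread:
//
//     let leaf_bytes = decode_raw_body(&raw_body_b64)?;
//     proof.verify(&leaf_bytes, rekor_key)?;
```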
Relates to: sigstore#283 Signed-off-by: Victor Embacher <victor@embacher.xyz>
…added some functionality. Relates to: sigstore#283 Signed-off-by: Victor Embacher <victor@embacher.xyz>
Signed-off-by: Victor Embacher <victor@embacher.xyz>
…tions. Signed-off-by: Victor Embacher <victor@embacher.xyz>
Signed-off-by: Victor Embacher <victor@embacher.xyz>
Signed-off-by: Victor Embacher <victor@embacher.xyz>
Force-pushed from acbcb91 to e45c952
This is failing on 32-bit targets because of the use of
Howdy 👋 is there someone in the community equipped to adequately review this MR? We at 1Password would be happy to review elements of this work, but the core team that has directed energy towards this project doesn't have previous experience with Merkle tree proof implementations.
@tannaurus I'm going to look at the Merkle tree proof, as I previously did a lot of work on this in a different client. However, I would need someone else to do the Rust review.
Signed-off-by: Victor Embacher <victor@embacher.xyz>
Sorry, my ability to read Rust is bad. I'm a little confused by the edits in rekor/models -- those seem like generated files? But the added code doesn't seem generated? Maybe I'm just reading Rust wrong. The inclusion proof looks like it's following the spec in RFC 6962; there's a slightly different implementation in RFC 9162 -- I don't know if it matters, but it's not recursive. It might be valuable to add some tests on the Rekor inclusion proof itself (rather than the sub-functions): one random example (from the Java client):
and then another failure test where you mangle this data a bit.
…ion proofs at this level Signed-off-by: Victor Embacher <victor@embacher.xyz>
…ypes Signed-off-by: Victor Embacher <victor@embacher.xyz>
Signed-off-by: Victor Embacher <victor@embacher.xyz>
Yeah, AFAIK it's a mix of auto-generated files from the OpenAPI spec which were then modified by hand.
If my understanding is correct, the semantics of the Merkle trees in RFC 6962 and RFC 9162 are identical (see the hashing sketch after this reply). Also, the Go cosign implementation is based on RFC 6962.
Good point! I added more tests based on Rekor responses to the code.
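For reviewers less familiar with the construction, here is a compact, self-contained sketch of the shared RFC 6962 / RFC 9162 hashing (0x00-prefixed leaves, 0x01-prefixed interior nodes) and the iterative inclusion-proof check from RFC 9162 §2.1.3.2. It uses the `sha2` crate and is an illustration of the algorithm, not code from this PR.

```rust
use sha2::{Digest, Sha256};

// RFC 6962 / RFC 9162 domain separation: 0x00 prefixes leaves, 0x01 prefixes interior nodes.
fn leaf_hash(leaf: &[u8]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update([0x00]);
    h.update(leaf);
    h.finalize().into()
}

fn node_hash(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update([0x01]);
    h.update(left);
    h.update(right);
    h.finalize().into()
}

// Recompute the root from a leaf hash and an inclusion path (RFC 9162, section 2.1.3.2).
// The caller compares the result against the signed root hash.
fn root_from_inclusion_proof(
    leaf_index: u64,
    tree_size: u64,
    leaf: [u8; 32],
    path: &[[u8; 32]],
) -> Option<[u8; 32]> {
    if leaf_index >= tree_size {
        return None;
    }
    let (mut fn_, mut sn, mut r) = (leaf_index, tree_size - 1, leaf);
    for p in path {
        if sn == 0 {
            return None;
        }
        if fn_ & 1 == 1 || fn_ == sn {
            r = node_hash(p, &r);
            // Skip over levels where this node has no right sibling.
            while fn_ & 1 == 0 && fn_ != 0 {
                fn_ >>= 1;
                sn >>= 1;
            }
        } else {
            r = node_hash(&r, p);
        }
        fn_ >>= 1;
        sn >>= 1;
    }
    (sn == 0).then_some(r)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn inclusion_proof_for_a_three_leaf_tree() {
        let leaves: Vec<[u8; 32]> =
            ["a", "b", "c"].iter().map(|l| leaf_hash(l.as_bytes())).collect();
        let inner = node_hash(&leaves[0], &leaves[1]);
        let root = node_hash(&inner, &leaves[2]);

        // The proof for leaf 2 in a tree of size 3 is just the hash of the first subtree.
        let recomputed = root_from_inclusion_proof(2, 3, leaves[2], &[inner]).unwrap();
        assert_eq!(recomputed, root);
    }
}
```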
The inclusion proofs seem to work as expected, but I'm having issues with the consistency proofs. The doctest example fails as of right now. I'm pretty sure it used to work in the past, but I'm not 100% certain. I added a test that, to my understanding, should be accepted, but it fails. Did something on the Rekor side change?
I tracked down the issue. The following sequence of events triggers it:
This is just a bug in my implementation.
Signed-off-by: Victor Embacher <victor@embacher.xyz>
…ctions Signed-off-by: Victor Embacher <victor@embacher.xyz>
I pushed a fix that should address the issue. I still want to improve the tests for the consistency proofs.
We haven't integrated it into the rest of
Summary
Related to: #283
This adds implementations for Merkle inclusion and consistency proofs, as well as checkpoints.
The Merkle proofs are essentially ports of the transparency-dev implementations, including the test suite.
The checkpoint-related code is based on the implementation in the `rekor` Go package. I also changed the Rekor models to use the new `SignedCheckpoint` type, which implements the `serde` traits. For now I think this is the only major breaking change; the rest are mostly new or private APIs.
I have not implemented the logic to verify that checkpoints and the corresponding consistency/inclusion proofs are sound together. I want to discuss how to handle this here.
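For context on what the `Checkpoint` / `SignedCheckpoint` types model: a Rekor checkpoint is a signed note consisting of an origin line, the tree size, the base64-encoded root hash, a blank line, and one or more signature lines. The shape below is illustrative only (placeholders, not real log data):

```text
rekor.sigstore.dev - <tree ID>
<tree size, decimal>
<base64-encoded SHA-256 root hash>

— rekor.sigstore.dev <base64(4-byte key hint || signature)>
```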
Release Note
- New error variants in the `SigstoreError` enum
- New `Checkpoint` type to handle verification of checkpoints/STHs
- Changed the `signed_tree_head` field from `String` to `Checkpoint`
- Added `checkpoint: Option<Checkpoint>` to the `InclusionProof` struct

Documentation
- Changes to `LogInfo` and `LogEntry` due to changes to struct fields; this should only require minor changes

Todos: