test: consolidate test helpers #169
Conversation
Walkthrough

The changes in this pull request involve the removal of the `test_utils` module and the consolidation of its test helpers into `TransactionBuilder`.

Changes
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (7)
crates/storage/src/rocksdb.rs (2)
128-128: Handle potential deserialization errors gracefully

When decoding `latest_value` with `OwnedValue::decode_from_bytes(&v)`, any deserialization errors will propagate as `Err`. Ensure that these errors are appropriately handled or logged to aid in debugging.
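A minimal sketch of the kind of contextual error wrapping this suggests. Plain `String` errors and a stand-in `decode_value` replace the real `OwnedValue`/`anyhow` types, so treat the names as hypothetical:

```rust
// Stand-in for OwnedValue::decode_from_bytes: decodes a little-endian u32.
fn decode_value(bytes: &[u8]) -> Result<u32, String> {
    let arr: [u8; 4] = bytes.try_into().map_err(|_| "wrong length".to_string())?;
    Ok(u32::from_le_bytes(arr))
}

// Instead of propagating a bare decode error, attach the key so a corrupt
// entry can be located during debugging.
fn get_latest_value(key: &str, raw: &[u8]) -> Result<u32, String> {
    decode_value(raw)
        .map_err(|e| format!("failed to decode latest_value for key {key}: {e}"))
}

fn main() {
    assert_eq!(get_latest_value("k", &42u32.to_le_bytes()), Ok(42));
    let err = get_latest_value("k", &[1, 2]).unwrap_err();
    assert!(err.contains("failed to decode latest_value for key k"));
    println!("ok");
}
```

The same shape applies with `anyhow::Context` in the real code: the decode call stays, only the error gains context.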
155-156: Consider the performance impact of hex encoding

Converting binary data to hex with `to_hex()` increases the storage size and may affect performance due to larger keys and values. Evaluate whether storing binary data directly without hex encoding is feasible and more efficient.

crates/common/src/transaction_builder.rs (3)
Line range hint 172-174: Handle missing account keys without panicking

In functions where account keys might be missing, using `panic!` can cause the entire application to crash. Consider returning a `Result` to allow the caller to handle the error appropriately. For example, change:

```rust
let Some(account_signing_key) = self.account_keys.get(id).cloned() else {
    panic!("No existing account key for {}", id)
};
```

To:

```rust
let account_signing_key = self.account_keys.get(id).cloned()
    .ok_or_else(|| anyhow!("No existing account key for {}", id))?;
```

And adjust the function's return type to propagate the error.
216-216: Use `unwrap_or_else` instead of `map_or` for clarity

In `let last_hash = self.hashchains.get(id).map_or(Digest::zero(), Hashchain::last_hash);`, consider using `unwrap_or_else` for better readability. Apply this change:

```diff
-let last_hash = self.hashchains.get(id).map_or(Digest::zero(), Hashchain::last_hash);
+let last_hash = self.hashchains.get(id)
+    .map(Hashchain::last_hash)
+    .unwrap_or_else(Digest::zero);
```
263-310: Refactor duplicated code in data addition methods

The methods `add_randomly_signed_data`, `add_randomly_signed_data_verified_with_root`, `add_signed_data`, and `add_signed_data_verified_with_root` contain similar logic. Consider refactoring to reduce code duplication and improve maintainability. You can create a generalized function to handle common logic and have these methods call it with appropriate parameters.
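One possible shape for that generalized function, sketched with stand-in types rather than the real `TransactionBuilder` (the `Entry`, `SigningKey`, and `Builder` names here are hypothetical):

```rust
#[derive(Clone)]
struct SigningKey(u8);

#[derive(Debug, PartialEq)]
struct Entry {
    data: Vec<u8>,
    key_id: u8,
    verified_with_root: bool,
}

struct Builder;

impl Builder {
    // One core method carries the shared logic...
    fn add_data_core(&mut self, data: Vec<u8>, key: &SigningKey, verified_with_root: bool) -> Entry {
        Entry { data, key_id: key.0, verified_with_root }
    }

    // ...and the public variants become thin wrappers over it.
    fn add_signed_data(&mut self, data: Vec<u8>, key: &SigningKey) -> Entry {
        self.add_data_core(data, key, false)
    }

    fn add_signed_data_verified_with_root(&mut self, data: Vec<u8>, key: &SigningKey) -> Entry {
        self.add_data_core(data, key, true)
    }

    fn add_randomly_signed_data(&mut self, data: Vec<u8>) -> Entry {
        let random_key = SigningKey(42); // stand-in for fresh key generation
        self.add_data_core(data, &random_key, false)
    }
}

fn main() {
    let mut b = Builder;
    let e = b.add_signed_data_verified_with_root(vec![1], &SigningKey(7));
    assert!(e.verified_with_root);
    assert_eq!(b.add_randomly_signed_data(vec![2]).key_id, 42);
    println!("ok");
}
```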
crates/serde/src/hex.rs (1)
4-15: Consider adding error handling to ToHex for consistency

While the current implementation is clean, consider adding error handling to match the pattern used in `ToBinary` and maintain consistency across the codebase.

```diff
 pub trait ToHex {
-    fn to_hex(&self) -> String;
+    fn to_hex(&self) -> Result<String>;
 }

 impl<T> ToHex for T
 where
     T: hex::ToHex,
 {
-    fn to_hex(&self) -> String {
-        self.encode_hex()
+    fn to_hex(&self) -> Result<String> {
+        Ok(self.encode_hex())
     }
 }
```

crates/storage/src/redis.rs (1)
117-117: Consider batching hex encoding operations

The current implementation performs multiple hex encoding operations within the Redis pipeline. Consider batching these operations to improve performance.

```diff
 fn write_node_batch(&self, node_batch: &NodeBatch) -> Result<()> {
     let mut con = self.lock_connection()?;
     let mut pipe = redis::pipe();
+    let encoded_nodes: Vec<_> = node_batch
+        .nodes()
+        .map(|(key, node)| {
+            Ok((
+                format!("node:{}", key.encode_to_bytes()?.to_hex()),
+                node.encode_to_bytes()?
+            ))
+        })
+        .collect::<Result<_>>()?;
-    for (node_key, node) in node_batch.nodes() {
-        let serialized_key = node_key.encode_to_bytes()?.to_hex();
-        let node_data = node.encode_to_bytes()?;
-        pipe.set(format!("node:{}", serialized_key), node_data);
+    for (key, data) in encoded_nodes {
+        pipe.set(key, data);
     }
```

Also applies to: 125-125, 139-139, 145-146
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)

- Cargo.toml (1 hunks)
- crates/common/Cargo.toml (0 hunks)
- crates/common/src/digest.rs (3 hunks)
- crates/common/src/hashchain.rs (1 hunks)
- crates/common/src/lib.rs (0 hunks)
- crates/common/src/test_utils.rs (0 hunks)
- crates/common/src/transaction.rs (1 hunks)
- crates/common/src/transaction_builder.rs (11 hunks)
- crates/common/src/tree/mod.rs (2 hunks)
- crates/common/src/tree/proofs.rs (1 hunks)
- crates/common/src/tree/snarkable_tree.rs (1 hunks)
- crates/da/Cargo.toml (0 hunks)
- crates/da/src/celestia.rs (2 hunks)
- crates/da/src/lib.rs (3 hunks)
- crates/keys/src/lib.rs (4 hunks)
- crates/keys/src/verifying_keys.rs (5 hunks)
- crates/node_types/prover/src/prover/tests.rs (3 hunks)
- crates/serde/Cargo.toml (1 hunks)
- crates/serde/src/base64.rs (1 hunks)
- crates/serde/src/binary.rs (1 hunks)
- crates/serde/src/hex.rs (1 hunks)
- crates/serde/src/lib.rs (2 hunks)
- crates/storage/Cargo.toml (0 hunks)
- crates/storage/src/redis.rs (5 hunks)
- crates/storage/src/rocksdb.rs (4 hunks)
- crates/tests/src/lib.rs (1 hunks)
💤 Files with no reviewable changes (5)
- crates/common/src/lib.rs
- crates/common/Cargo.toml
- crates/da/Cargo.toml
- crates/storage/Cargo.toml
- crates/common/src/test_utils.rs
✅ Files skipped from review due to trivial changes (2)
- crates/common/src/hashchain.rs
- crates/common/src/tree/proofs.rs
🔇 Additional comments (31)
crates/common/src/digest.rs (1)
Line range hint 1-97: Enhancements to `Digest` struct are well-implemented

The additions of new methods such as `new`, `from_hex`, `from_base64`, and `to_bytes` enhance the usability of the `Digest` struct. Implementing `FromHex` and `FromBase64` for `Digest` promotes consistent encoding and decoding practices.
crates/common/src/tree/snarkable_tree.rs (1)
Line range hint 7-161: Transaction processing enhancements are sound

The updates to the `process_transaction`, `insert`, and `update` methods improve error handling and add necessary checks, enhancing the robustness of the transaction processing logic.
crates/storage/src/rocksdb.rs (4)
11-14: Imports of `prism_serde` modules are appropriate

The inclusion of `FromBinary`, `ToBinary`, `FromHex`, and `ToHex` from `prism_serde` replaces the previous serialization dependencies effectively.
119-120: Verify correctness of key slicing for version extraction

In line 120, the version is extracted using `key[value_key.len() + 1..]`. Ensure that this slicing accurately captures the version segment of the key. An off-by-one error or incorrect slicing logic could lead to failures in decoding the version. Consider adding unit tests to validate the key slicing and version decoding.
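A hypothetical test of that slicing, assuming the `<value_key>:<hex_version>` key layout described above (the `extract_version` helper is a stand-in for the inline slice in rocksdb.rs):

```rust
// Mirrors key[value_key.len() + 1..]: the +1 skips the ':' separator
// between the value key and the version segment.
fn extract_version<'a>(key: &'a str, value_key: &str) -> &'a str {
    &key[value_key.len() + 1..]
}

fn main() {
    let value_key = "value:abc";
    let key = format!("{value_key}:00ff");
    assert_eq!(extract_version(&key, value_key), "00ff");

    // Off-by-one check: an empty version segment must come back empty,
    // not panic or swallow the separator.
    let bare = format!("{value_key}:");
    assert_eq!(extract_version(&bare, value_key), "");
    println!("ok");
}
```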
138-140: Ensure correct key slicing when decoding node keys

The slicing operation `key[KEY_PREFIX_NODE.len()..]` is used to extract the node key for decoding. Verify that this approach consistently extracts the intended portion of the key, especially if `KEY_PREFIX_NODE` changes in the future.
161-162: Maintain consistency in key formatting

The construction of `version_key` using `format!("{}:{}", value_key, version.encode_to_bytes()?.to_hex())` should be consistent with key formats elsewhere in the codebase. Inconsistencies can lead to difficulties in data retrieval and maintenance.

Also applies to: 165-165
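One way to enforce that consistency is a single key-builder helper that every call site goes through. A sketch, with a hand-rolled `to_hex` standing in for the `prism_serde` trait:

```rust
// Stand-in for prism_serde's ToHex.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

// Single source of truth for the "<value_key>:<hex>" shape, so the
// format cannot drift between write and read paths.
fn version_key(value_key: &str, version_bytes: &[u8]) -> String {
    format!("{value_key}:{}", to_hex(version_bytes))
}

fn main() {
    assert_eq!(version_key("value:abc", &[0x00, 0xff]), "value:abc:00ff");
    println!("ok");
}
```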
crates/common/src/tree/mod.rs (3)
33-58: Tests updated to reflect the new hashchain-based architecture

The test cases have been appropriately updated to use `TransactionBuilder` and `KeyDirectoryTree` with the new hashchain implementation. This ensures that the tests are aligned with the current system design.

64-78: Proper error handling in test for nonexistent service

The test `test_insert_for_nonexistent_service_fails` correctly asserts that inserting an account with a nonexistent service results in an error, ensuring robustness.

83-109: Effective validation in test for invalid service challenge

The test `test_insert_with_invalid_service_challenge_fails` effectively checks that an account creation with an invalid service signing key fails as expected.
crates/serde/Cargo.toml (1)
15-15: Addition of `serde_bytes` dependency is appropriate

Including `serde_bytes` aligns with the project's serialization requirements and helps handle byte slices efficiently.
crates/serde/src/binary.rs (2)
4-15: LGTM! Clean separation of encoding functionality.

The `ToBinary` trait follows the single responsibility principle by focusing solely on encoding. The generic implementation for all `Serialize` types is a good approach.

17-27: LGTM! Well-designed deserialization trait.

The `FromBinary` trait is well-designed with:

- Appropriate `Sized` bound
- Flexible input through `AsRef<[u8]>`
- Generic lifetime handling with `for<'de>`
crates/serde/src/hex.rs (1)
17-33: LGTM! Well-structured error handling.

The `FromHex` trait has excellent error handling with appropriate bounds and type constraints.
crates/da/src/lib.rs (3)
5-8: LGTM: Import changes align with serialization standardization

The replacement of `BinaryTranscodable` with more specific traits (`ToBinary`, `FromHex`, `ToHex`) improves code clarity by explicitly declaring the required functionality.

31-31: LGTM: Signature encoding improvement

The change from `hex::encode` to `to_hex()` aligns with the new standardized serialization approach.

50-51: LGTM: Signature decoding improvement

The change to use `Vec::<u8>::from_hex()` maintains the same functionality while using the new standardized trait methods.
Cargo.toml (1)
50-50: LGTM: Added serde_bytes dependency

The addition of `serde_bytes` is appropriate for improved binary serialization handling and aligns with the serialization standardization efforts.
crates/tests/src/lib.rs (1)
94-94: LGTM: Method name clarifies signed account creation

The change from `create_account_with_random_key` to `create_account_with_random_key_signed` improves code clarity by explicitly indicating that the account creation includes signing.
crates/keys/src/lib.rs (2)
13-13: LGTM: Consistent use of prism_serde for encoding

The change to use `prism_serde::base64::ToBase64` aligns with the consolidation of encoding methods across the codebase.

104-104: LGTM: Consistent base64 encoding implementation

The replacement of direct base64 encoding with the `to_base64()` method provides a more uniform approach across the test cases.

Also applies to: 117-117, 129-129
crates/storage/src/redis.rs (2)
7-10: LGTM: Structured imports from prism_serde

The imports are well-organized and clearly separate binary and hex encoding concerns.

84-84: Verify error handling for hex encoding/decoding of node keys

While the conversion to hex encoding is correct, ensure proper error handling for malformed hex data in the Redis store.

Also applies to: 98-98
✅ Verification successful
Let me analyze the hex encoding/decoding error handling by checking the implementation of the Redis storage.
Let me try a different approach to check the Redis storage implementation.
Let me try one final approach to check the Redis implementation.
Hex encoding/decoding is properly handled with error propagation

The codebase shows proper error handling for hex encoding/decoding operations in the Redis storage implementation:

- In `get_node_option`: the `to_hex()` operation is used within a `?` operator chain, propagating any encoding errors
- In `get_rightmost_leaf`: the `from_hex()` operation uses the `?` operator to handle decoding errors
- In `get_value_option`: the `from_hex()` operation on `OwnedValue` is properly error-handled with `?`
- In `write_node_batch`: the `to_hex()` operations are properly chained with `?` for error propagation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for error handling in hex encoding/decoding
rg -A 3 'from_hex|to_hex.*\?'
Length of output: 3663
Script:
#!/bin/bash
# Check Redis storage implementation for error handling
ast-grep --pattern 'impl Storage for RedisStorage {
$$$
}'
Length of output: 62
Script:
#!/bin/bash
# Check Redis storage implementation for error handling
rg -A 10 "impl.*RedisStorage" crates/storage/src/redis.rs
Length of output: 59
Script:
#!/bin/bash
# Check the Redis storage file content
cat crates/storage/src/redis.rs
Length of output: 7799
crates/node_types/prover/src/prover/tests.rs (3)
25-28: LGTM: Consistent use of signed account creation

The change to use `create_account_with_random_key_signed` ensures that all test accounts are properly signed.

57-59: LGTM: Test maintains transaction validation integrity

The transition to signed account creation maintains the test's validation logic while ensuring proper account signatures.

100-100: LGTM: Invalid transaction test properly updated

The test case for invalid transactions correctly uses the signed account creation method while maintaining its original purpose of testing invalid transaction handling.
crates/keys/src/verifying_keys.rs (4)
20-23: LGTM! Well-organized imports

The new imports from `prism_serde` are appropriately scoped and support the base64 encoding/decoding functionality.

167-167: LGTM! Clean implementation

The simplified public key generation maintains the same functionality while being more concise.

224-236: LGTM! Clean and consistent implementations

The String conversion and Display implementations make good use of the new base64 traits, maintaining consistency throughout the codebase.

Line range hint 187-219: Consider adding support for Secp256r1 keys

While the implementation is well-documented and handles Ed25519 and Secp256k1 keys correctly, it doesn't support decoding Secp256r1 keys despite the enum variant existing. This could lead to confusion for users trying to decode Secp256r1 keys from base64. Let's check if Secp256r1 keys are used in the codebase:
Consider adding Secp256r1 support with this change:

```diff
 match bytes.len() {
     32 => {
         let vk = Ed25519VerifyingKey::try_from(bytes.as_slice())
             .map_err(|e| anyhow!("Invalid Ed25519 key: {}", e))?;
         Ok(VerifyingKey::Ed25519(vk))
     }
     33 | 65 => {
-        let vk = Secp256k1VerifyingKey::from_slice(bytes.as_slice())
-            .map_err(|e| anyhow!("Invalid Secp256k1 key: {}", e))?;
-        Ok(VerifyingKey::Secp256k1(vk))
+        // Try Secp256k1 first
+        if let Ok(vk) = Secp256k1VerifyingKey::from_slice(bytes.as_slice()) {
+            return Ok(VerifyingKey::Secp256k1(vk));
+        }
+        // Try Secp256r1 if Secp256k1 fails
+        let vk = Secp256r1VerifyingKey::from_sec1_bytes(bytes.as_slice())
+            .map_err(|e| anyhow!("Invalid Secp256k1/Secp256r1 key: {}", e))?;
+        Ok(VerifyingKey::Secp256r1(vk))
     }
     _ => Err(anyhow!("Invalid public key length")),
 }
```
crates/da/src/celestia.rs (2)
9-12: LGTM! Well-structured imports

The imports are appropriately organized and align with the codebase's approach to binary data handling.

Line range hint 94-102: LGTM! Consistent hex decoding implementation

The change to use `Vec::<u8>::from_hex` maintains the same functionality while being consistent with the codebase's approach to hexadecimal processing.
bfd6f66 to de26795 (force-push)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
crates/serde/src/lib.rs (1)

21-24: Consider relaxing trait bounds for better reusability.

The current trait bounds `T: ToHex + BytesSerialize` and `T: FromHex + BytesDeserialize<'de>` might be too restrictive. Consider splitting these into separate functions if both traits aren't always needed.

Also applies to: 34-38
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (27)

- Cargo.toml (1 hunks)
- crates/common/Cargo.toml (0 hunks)
- crates/common/src/digest.rs (3 hunks)
- crates/common/src/hashchain.rs (1 hunks)
- crates/common/src/lib.rs (0 hunks)
- crates/common/src/test_utils.rs (0 hunks)
- crates/common/src/transaction.rs (1 hunks)
- crates/common/src/transaction_builder.rs (11 hunks)
- crates/common/src/tree/mod.rs (2 hunks)
- crates/common/src/tree/proofs.rs (1 hunks)
- crates/common/src/tree/snarkable_tree.rs (5 hunks)
- crates/da/Cargo.toml (0 hunks)
- crates/da/src/celestia.rs (2 hunks)
- crates/da/src/lib.rs (3 hunks)
- crates/keys/src/lib.rs (4 hunks)
- crates/keys/src/verifying_keys.rs (5 hunks)
- crates/node_types/prover/src/prover/mod.rs (1 hunks)
- crates/node_types/prover/src/prover/tests.rs (3 hunks)
- crates/serde/Cargo.toml (1 hunks)
- crates/serde/src/base64.rs (1 hunks)
- crates/serde/src/binary.rs (1 hunks)
- crates/serde/src/hex.rs (1 hunks)
- crates/serde/src/lib.rs (2 hunks)
- crates/storage/Cargo.toml (0 hunks)
- crates/storage/src/redis.rs (5 hunks)
- crates/storage/src/rocksdb.rs (4 hunks)
- crates/tests/src/lib.rs (1 hunks)
💤 Files with no reviewable changes (5)
- crates/da/Cargo.toml
- crates/common/Cargo.toml
- crates/storage/Cargo.toml
- crates/common/src/lib.rs
- crates/common/src/test_utils.rs
🚧 Files skipped from review as they are similar to previous changes (17)
- crates/serde/Cargo.toml
- crates/keys/src/lib.rs
- Cargo.toml
- crates/common/src/hashchain.rs
- crates/common/src/transaction.rs
- crates/tests/src/lib.rs
- crates/common/src/tree/proofs.rs
- crates/da/src/lib.rs
- crates/common/src/tree/mod.rs
- crates/da/src/celestia.rs
- crates/serde/src/binary.rs
- crates/common/src/tree/snarkable_tree.rs
- crates/serde/src/hex.rs
- crates/node_types/prover/src/prover/tests.rs
- crates/keys/src/verifying_keys.rs
- crates/storage/src/redis.rs
- crates/serde/src/base64.rs
🔇 Additional comments (10)
crates/storage/src/rocksdb.rs (3)
11-14: LGTM: Clean import structure

The imports are well-organized and specifically import only the required traits from `prism_serde`.
97-101: Consider adding serialization format versioning

The switch from bincode to prism_serde affects data serialization across all TreeReader methods. While the implementation is correct, consider adding a version prefix to the serialized data to support future format changes and migrations. Here's a suggested approach:

```diff
 impl TreeReader for RocksDBConnection {
+    const SERIALIZATION_VERSION: u8 = 1;
+
     fn get_node_option(&self, node_key: &NodeKey) -> Result<Option<Node>> {
-        let key = format!("{KEY_PREFIX_NODE}{}", node_key.encode_to_bytes()?.to_hex());
+        let key = format!("{KEY_PREFIX_NODE}v{}{}", Self::SERIALIZATION_VERSION, node_key.encode_to_bytes()?.to_hex());
```

Also applies to: 111-128, 136-140
155-165: Verify test coverage for serialization edge cases

While the implementation looks correct, we should ensure comprehensive test coverage for serialization edge cases. Let's verify the test coverage:

✅ Verification successful

Serialization implementation and test coverage is adequate

The verification shows comprehensive test coverage and proper implementation:

- Serialization is handled through the `ToBinary` and `FromBinary` traits in `binary.rs`
- The RocksDB implementation has tests covering key operations, including node reading/writing with serialization, value history with versioning, and error handling through `Result` types
- Multiple test cases cover different scenarios (commitment, epoch, value versions)
- The serialization is backed by `bincode`, a well-tested serialization framework
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check test coverage for serialization methods
# Look for test cases covering serialization
rg -A 5 "encode_to_bytes|decode_from_bytes" "crates/storage/src/rocksdb.rs" | grep -B 5 "#\[test\]"
# Look for edge case tests
ast-grep --pattern 'fn test_$_($$$) {
$$$
assert!($_);
}'
Length of output: 163
Script:
#!/bin/bash
# Let's try a different approach to find test coverage
# Look for test modules and functions in the rocksdb.rs file
rg -B 2 -A 5 "#\[test\]" "crates/storage/src/rocksdb.rs"
# Search for any test files related to rocksdb
fd "test.*\.rs|.*_test\.rs" --type f
# Look for any serialization-related tests in the entire codebase
rg -l "encode_to_bytes|decode_from_bytes.*#\[test\]"
Length of output: 1103
Script:
#!/bin/bash
# Let's examine the serialization tests in more detail
# Check the test implementations in rocksdb.rs
rg -B 2 -A 10 "encode_to_bytes|decode_from_bytes" "crates/storage/src/rocksdb.rs"
# Look at the binary serialization tests
rg -B 2 -A 10 "encode_to_bytes|decode_from_bytes" "crates/serde/src/binary.rs"
# Check if there are any error handling tests
rg -A 5 "Error|unwrap|expect" "crates/storage/src/rocksdb.rs" | rg "#\[test\]" -B 5
Length of output: 3954
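The edge cases worth covering can be sketched as a roundtrip test. A toy length-prefix codec stands in here for the real `bincode`-backed `ToBinary`/`FromBinary`; only the testing pattern, not the encoding, carries over:

```rust
// Toy codec: one length byte followed by the payload.
fn encode_to_bytes(values: &[u8]) -> Vec<u8> {
    let mut out = vec![values.len() as u8];
    out.extend_from_slice(values);
    out
}

fn decode_from_bytes(bytes: &[u8]) -> Result<Vec<u8>, String> {
    let (&len, rest) = bytes.split_first().ok_or("empty buffer")?;
    if rest.len() != len as usize {
        return Err("length mismatch: corrupt input".into());
    }
    Ok(rest.to_vec())
}

fn main() {
    // Roundtrip, including the empty-payload edge case.
    for payload in [vec![], vec![1u8, 2, 3]] {
        assert_eq!(decode_from_bytes(&encode_to_bytes(&payload)), Ok(payload));
    }
    // Corrupt / truncated inputs must fail instead of panicking.
    assert!(decode_from_bytes(&[]).is_err());
    assert!(decode_from_bytes(&[5, 1]).is_err());
    println!("ok");
}
```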
crates/common/src/digest.rs (2)
16-18: LGTM! Good practice adding a const constructor.

The `new` constructor provides a clean way to create a `Digest` from raw bytes.

6-10: Verify the impact of serialization changes.

The switch from `raw_or_hex_fixed` to `raw_or_hex` serialization might affect existing serialized data.
crates/serde/src/lib.rs (1)
38-38: LGTM! Good error handling practice.

Adding the `Display` trait bound for `T::Error` ensures that error messages are human-readable.

Also applies to: 73-73
crates/common/src/transaction_builder.rs (3)
139-147: LGTM! Clean implementation of account creation.

The new `create_account_with_random_key` method provides a clear interface for account creation with proper key generation.

263-309: LGTM! Well-structured signed data methods.

The new methods for handling signed data follow a consistent pattern and provide good separation of concerns between random and pre-signed data.

26-33: Replace `expect` with proper error handling.

Using `expect` can lead to runtime panics. Consider propagating the error instead.

```diff
-hc.add_entry(self.transaction.entry.clone())
-    .expect("Adding transaction entry to hashchain should work");
+hc.add_entry(self.transaction.entry.clone())?;
```

Likely invalid or redundant comment.
crates/node_types/prover/src/prover/mod.rs (1)
462-462: Verify the direct use of unhashed ID for KeyHash creation

The change removes the intermediate hashing step when creating the `KeyHash`. While this simplifies the code, we should verify:

- That removing the intermediate hashing step doesn't impact security (e.g., if it was protecting against certain types of attacks)
- That this change is consistent with how `KeyHash` is created in other parts of the codebase

Let's verify the usage pattern across the codebase:

✅ Verification successful

Direct use of ID for KeyHash creation is consistent with codebase patterns

The verification shows that all instances of `KeyHash` creation across the codebase consistently use `KeyHash::with::<Hasher>()` directly with the ID, without any intermediate hashing. The pattern in the changed line matches the standard usage found in:

- crates/common/src/tree/snarkable_tree.rs
- crates/common/src/tree/mod.rs
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how KeyHash is created across the codebase
# Expected results: Consistent usage of KeyHash::with without intermediate hashing
# Check for direct KeyHash creation patterns
rg "KeyHash::with" -A 2
# Check for any intermediate hashing before KeyHash creation
rg "Digest::hash.*KeyHash" -A 2
# Check for any other KeyHash creation patterns
ast-grep --pattern 'KeyHash::$_($$$)'
Length of output: 3427
a3ba083 to 9c9f42c (force-push)

de26795 to 99ad174 (force-push)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)

crates/common/src/transaction_builder.rs (2)

216-216: Consider extracting repeated hashchain access pattern

The pattern `map_or(Digest::zero(), Hashchain::last_hash)` is repeated multiple times. Consider extracting it into a private helper method for better maintainability and the DRY principle.

```diff
+fn get_last_hash(&self, id: &str) -> Digest {
+    self.hashchains.get(id).map_or(Digest::zero(), Hashchain::last_hash)
+}

 pub fn add_key(
     &mut self,
     id: &str,
     key: VerifyingKey,
     signing_key: &SigningKey,
     key_idx: usize,
 ) -> UncommittedTransaction {
-    let last_hash = self.hashchains.get(id).map_or(Digest::zero(), Hashchain::last_hash);
+    let last_hash = self.get_last_hash(id);
```

Also applies to: 249-249, 370-370

117-123: Add documentation for account creation methods

The different account creation methods (`create_account_with_random_key_signed`, `create_account_signed`, `create_account_with_random_key`) could benefit from documentation explaining their differences and use cases. Add documentation like this:

```diff
+/// Creates a new account with a random key, using the stored service key for signing.
+/// This is a convenience method that combines key generation and account creation.
+///
+/// # Panics
+/// Panics if no service key is found for the given service_id.
 pub fn create_account_with_random_key_signed(
     &mut self,
     id: &str,
     service_id: &str,
 ) -> UncommittedTransaction {
```

Also applies to: 126-137, 139-147
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)

- crates/common/src/digest.rs (2 hunks)
- crates/common/src/lib.rs (0 hunks)
- crates/common/src/test_utils.rs (0 hunks)
- crates/common/src/transaction_builder.rs (11 hunks)
- crates/common/src/tree/mod.rs (2 hunks)
- crates/node_types/prover/src/prover/tests.rs (3 hunks)
- crates/tests/src/lib.rs (1 hunks)
💤 Files with no reviewable changes (2)
- crates/common/src/lib.rs
- crates/common/src/test_utils.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- crates/tests/src/lib.rs
- crates/node_types/prover/src/prover/tests.rs
- crates/common/src/tree/mod.rs
🔇 Additional comments (3)
crates/common/src/digest.rs (1)

67-71: LGTM! Clean implementation of `From` trait.

The implementation follows the established pattern and enhances type safety by providing explicit conversion between `Digest` and `KeyHash`.
crates/common/src/transaction_builder.rs (2)

26-33: Error handling needs improvement

The `.expect()` call could cause runtime panics. This is particularly concerning in the `commit` method, which is a core operation. This issue was previously identified. Apply the suggested fix:

```diff
-hc.add_entry(self.transaction.entry.clone())
-    .expect("Adding transaction entry to hashchain should work");
+hc.add_entry(self.transaction.entry.clone())?;
```

Update the function signature to return `Result<Transaction, Error>`.
132-134: Replace panic with proper error handling

Panicking when service keys are not found is not appropriate for production code. This should be handled gracefully with proper error propagation. Apply the suggested fix:

```diff
-let Some(service_signing_key) = self.service_keys.get(service_id).cloned() else {
-    panic!("No existing service found for {}", service_id)
-};
+let service_signing_key = self.service_keys.get(service_id)
+    .cloned()
+    .ok_or_else(|| anyhow::anyhow!("No existing service found for {}", service_id))?;
```

Update the function signature to return `Result<UncommittedTransaction, anyhow::Error>`.
99ad174 to fee1138 (force-push)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
crates/common/src/transaction_builder.rs (2)
263-282: Refactor to reduce code duplication in data addition methods

The methods `add_randomly_signed_data` and `add_randomly_signed_data_verified_with_root` have similar implementations with slight variations. Consider refactoring these methods to reduce code duplication and improve maintainability. You could abstract the common logic into a single method that accepts parameters to handle the variations, such as whether to verify with root or the signing key to use.
139-147: Consolidate account creation methods to avoid redundancy

The methods `create_account_with_random_key_signed` and `create_account_with_random_key` are similar, with the primary difference being the source of the service signing key. Consider unifying these methods or refactoring to reduce redundancy. You might create a single method that accepts an optional service signing key. If the key is not provided, it retrieves the key from `self.service_keys`. Example:

```rust
pub fn create_account_with_random_key(
    &mut self,
    id: &str,
    service_id: &str,
    service_signing_key: Option<&SigningKey>,
) -> UncommittedTransaction {
    let service_signing_key = match service_signing_key {
        Some(key) => key.clone(),
        None => self.service_keys.get(service_id).cloned().expect("No existing service found"),
    };
    // Rest of the logic...
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)

- crates/common/src/lib.rs (0 hunks)
- crates/common/src/test_utils.rs (0 hunks)
- crates/common/src/transaction_builder.rs (11 hunks)
- crates/common/src/tree/mod.rs (2 hunks)
- crates/node_types/prover/src/prover/tests.rs (3 hunks)
- crates/tests/src/lib.rs (1 hunks)
💤 Files with no reviewable changes (2)
- crates/common/src/lib.rs
- crates/common/src/test_utils.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- crates/tests/src/lib.rs
- crates/common/src/tree/mod.rs
- crates/node_types/prover/src/prover/tests.rs
🔇 Additional comments (5)
crates/common/src/transaction_builder.rs (5)
216-218: Ensure consistent handling of `last_hash` when adding entries

The retrieval of `last_hash` using `map_or` could lead to unintended behavior if the `hashchains` entry does not exist. Verify that initializing with `Digest::zero()` is appropriate in these contexts. Double-check that starting a new hashchain with a zero digest is the intended behavior. If starting with a zero digest is acceptable, consider documenting this behavior for clarity.

Also applies to: 249-251, 370-373
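If the zero-digest initialization is the intended behavior, a small sketch makes the semantics explicit. The `Digest` and `Hashchain` here are cut-down stand-ins (4-byte digest, no entries), not the real prism types:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Digest([u8; 4]); // stand-in for the real 32-byte digest

impl Digest {
    fn zero() -> Self {
        Digest([0; 4])
    }
}

struct Hashchain {
    tip: Digest,
}

impl Hashchain {
    fn last_hash(&self) -> Digest {
        self.tip
    }
}

// Existing chain -> its tip; unknown id -> the all-zero digest that a
// new chain's genesis entry is expected to link to.
fn last_hash_for(chains: &HashMap<String, Hashchain>, id: &str) -> Digest {
    chains.get(id).map_or(Digest::zero(), Hashchain::last_hash)
}

fn main() {
    let mut chains = HashMap::new();
    chains.insert("alice".to_string(), Hashchain { tip: Digest([1, 2, 3, 4]) });
    assert_eq!(last_hash_for(&chains, "alice"), Digest([1, 2, 3, 4]));
    assert_eq!(last_hash_for(&chains, "bob"), Digest::zero());
    println!("ok");
}
```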
26-33: 🛠️ Refactor suggestion

Replace `.expect()` with proper error handling

Using `.expect()` may cause the application to panic unexpectedly. Consider handling the potential error from `hc.add_entry` gracefully by returning a `Result` and propagating the error. Apply this diff to handle errors properly:

```diff
-hc.add_entry(self.transaction.entry.clone())
-    .expect("Adding transaction entry to hashchain should work");
+hc.add_entry(self.transaction.entry.clone())?;
```

Update the function signature of `commit` to return `Result<Transaction, Error>` accordingly.
132-134: 🛠️ Refactor suggestion

Replace `panic!` with proper error handling

Panicking when the service signing key is missing is not ideal. Instead, return a `Result` to handle the absence of the service signing key gracefully. Modify the code as follows:

```diff
-let Some(service_signing_key) = self.service_keys.get(service_id).cloned() else {
-    panic!("No existing service found for {}", service_id)
-};
+let service_signing_key = self.service_keys.get(service_id).cloned()
+    .ok_or_else(|| anyhow!("No existing service found for {}", service_id))?;
```

Update the function signature of `create_account_signed` to return `Result<UncommittedTransaction, Error>`.
279-281: 🛠️ Refactor suggestion

Replace `panic!` with proper error handling

Panicking when the account signing key is missing may not be ideal. Instead, return a `Result` to handle the absence of the account signing key gracefully. Modify the code as follows:

```diff
-let Some(account_signing_key) = self.account_keys.get(id).cloned() else {
-    panic!("No existing account key for {}", id)
-};
+let account_signing_key = self.account_keys.get(id).cloned()
+    .ok_or_else(|| anyhow!("No existing account key for {}", id))?;
```

Update the function signature of `add_signed_data_verified_with_root` to return `Result<UncommittedTransaction, Error>`.

Likely invalid or redundant comment.
Line range hint 157-173: Consider securing the hash computation and signature process

When creating a new account, the hash is computed over the `id`, `service_id`, and the verifying key. Ensure that this process is secure and resistant to tampering. Run the following script to verify that the hash computation is consistent across the codebase, and ensure that the pattern matches only the intended usage with no discrepancies.

✅ Verification successful

Hash computation for account creation is consistent and secure

The verification reveals that the hash computation for account creation is implemented securely and consistently:

- The hash computation in `transaction_builder.rs` matches the verification in `snarkable_tree.rs`, both using the same inputs (`id`, `service_id`, and verifying key bytes)
- The `Digest::hash_items` implementation uses a proper cryptographic hasher that processes all input items sequentially
- The signature is computed over the resulting hash bytes, maintaining the cryptographic chain of trust
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent hash computation for account creation
# Search for all instances of Digest::hash_items usage with account creation
rg 'Digest::hash_items\(\[.*id.*service_id.*vk\.to_bytes.*\]\)'
Length of output: 65
Script:
#!/bin/bash
# Let's first search for all usages of Digest::hash_items to understand the hash computation patterns
rg "Digest::hash_items" -A 3
# Also search for any other hash computations related to account creation
rg "new_create_account" -A 5
# Let's also check the Digest implementation
ast-grep --pattern 'impl Digest {
$$$
}'
Length of output: 4206
fee1138 to b3f159c (force-push)
Summary by CodeRabbit

New Features

Bug Fixes

Refactor

- Removed the `test_utils` module and associated structures.

Tests