katana: simplify database backend by removing in-memory provider #2571
Conversation
This will eventually be used as the storage backend for the in-memory provider, to simplify the logic. Having to maintain completely different logic for multiple providers is a lot of work; eventually we should just use the database API abstraction for provider-to-storage interaction. Therefore, the ephemeral database is initialized with the lowest durability configuration, using [`MDBX_UTTERLY_NOSYNC`](https://erthink.github.io/libmdbx/group__c__opening.html#gga9138119a904355d245777c4119534061aec1ed5ccbf0626391088f787ffc4962c) to trade durability for better write performance.
Implement the [`Arbitrary`](https://docs.rs/arbitrary/latest/arbitrary/trait.Arbitrary.html) trait for all types that are used in the database, so that random values of all the database types can be generated.

Due to Rust's orphan rule, some types that we use from external crates (mainly `starknet-rs`) don't implement the trait, so we can't directly derive it. To get around this, the simplest solution is to own the types and define them ourselves. This approach requires more marshalling between our primitive types and the RPC types (for RPC we mostly use types from starknet-rs), which is not the most elegant solution. I think eventually we should not rely on an external crate for our RPC types and should maintain them ourselves for more flexibility (if the changes we need can't be included upstream). The idea for this PR is to use `Arbitrary` to generate a bunch of random values for all DB types and use them in a benchmark.

---

The test DB needs to be updated because of the way we define [`ExecutionResources`](https://github.com/dojoengine/dojo/blob/a4ee208b517daead41ac1a6b855af3abb03294c3/crates/katana/primitives/src/trace.rs#L12-L20): it doesn't serde into the same format as the one we used before (from [cairo-vm](https://github.com/dojoengine/dojo/blob/a4ee208b517daead41ac1a6b855af3abb03294c3/crates/katana/primitives/src/trace.rs#L12-L20)). I would prefer to keep the breaking serde change, mainly because it doesn't de/serialize `builtin_instance_counter` as raw strings of the builtin names, and is therefore more storage optimized. We're still using the builtin name types from cairo-vm anyway, so marshalling between them is straightforward as we don't need to convert the individual map entries. Although this change breaks the DB format, we already bumped it in #2560, and since that hasn't been included in a release yet, there's no need to bump it again.
Walkthrough

Ohayo, sensei! This pull request encompasses extensive updates across various crates.
Actionable comments posted: 19
🧹 Outside diff range and nitpick comments (36)
crates/katana/storage/db/src/models/block.rs (1)
Line range hint 7-15: Consider documenting the Arbitrary implementation expectations

Since we're using arbitrary for test data generation, it would be helpful to document any assumptions or constraints about the generated values, especially for `tx_offset` and `tx_count`.

Add documentation like this:

 #[cfg_attr(test, derive(::arbitrary::Arbitrary))]
 pub struct StoredBlockBodyIndices {
+    /// The offset in database of the first transaction in the block.
+    /// For testing: Arbitrary implementation generates values within the valid TxNumber range.
     ///
     /// `tx_offset` is a key of `Transactions` table.
     pub tx_offset: TxNumber,
+    /// The total number of transactions in the block.
+    /// For testing: Arbitrary implementation ensures this is a reasonable value.
     /// The total number of transactions in the block.
     pub tx_count: u64,
 }

crates/katana/primitives/src/da/mod.rs (2)
7-16: Ohayo! Consider enhancing the documentation for L1DataAvailabilityMode.

The current documentation could be more descriptive to help other developers understand:
- The purpose of each variant (Blob vs Calldata)
- When to use each mode
- Any implications or constraints of choosing one over the other
Example improvement:
-/// L1 da mode.
+/// Represents the data availability mode for Layer 1.
+///
+/// # Variants
+/// * `Blob` - Uses EIP-4844 blobs for data availability
+/// * `Calldata` - Uses traditional calldata for data availability
+///
+/// This enum determines how transaction data is stored and accessed on Layer 1.
18-26: Documentation missing for DataAvailabilityMode enum.

As this is a public enum that's part of the API, it should have comprehensive documentation explaining:
- The purpose of the enum
- When to use L1 vs L2 mode
- Any implications for the data availability strategy
Suggested addition:
+/// Specifies whether data availability is handled on Layer 1 or Layer 2.
+///
+/// # Variants
+/// * `L1` - Data availability is managed on Layer 1
+/// * `L2` - Data availability is managed on Layer 2
+///
+/// This enum is used to configure the data availability strategy for the system.

crates/katana/storage/db/Cargo.toml (1)
40-40: Consider documenting test-utils feature usage

The test-utils feature array is now empty. While this aligns with removing in-memory provider specific utilities, it might be worth documenting why this feature exists if it's still needed.
Consider adding a comment above the feature explaining its purpose:
+# Empty feature flag maintained for backward compatibility
 test-utils = [ ]
crates/katana/storage/db/src/models/contract.rs (2)
Line range hint 42-62: Excellent parallel implementation, sensei!

The ContractNonceChange follows the same robust pattern as ContractClassChange, maintaining consistency in the codebase. The arbitrary derive for testing is particularly valuable for property-based testing of nonce handling.
Consider adding a test that verifies the ordering preservation during compression/decompression of sequential nonces.
Line range hint 1-14: Solid architectural foundation for the database transition!

The design aligns perfectly with the PR's objective of simplifying the database backend. The use of BlockList for managing changes provides a clean abstraction layer.
Consider enhancing error handling by:
- Defining custom error types for compression/decompression failures
- Adding error context specific to contract operations
crates/katana/executor/tests/simulate.rs (1)
Line range hint 19-78: Comprehensive test coverage, sensei! 🎯

The test implementation demonstrates excellent practices:
- Thorough test cases using rstest's templating
- Proper validation of simulation results and fee estimation
- Comprehensive state verification ensuring no persistent changes
- Well-structured error cases with explicit should_panic tests
Consider adding test cases that specifically verify the behavior with the new database backend to ensure complete coverage of the architectural changes.
crates/katana/primitives/src/event.rs (1)
7-8: Ohayo! Great addition of conditional derives, sensei!

The addition of `serde` and `arbitrary` derives is well-aligned with the PR's objective of transitioning to the mdbx backend. The `serde` support is essential for database serialization, while `arbitrary` enhances testing capabilities.

Consider documenting these feature flags in the crate's README.md to help other developers understand when and why they might want to enable these features during development or testing.
crates/katana/executor/benches/concurrent.rs (1)
Line range hint 1-116: Consider adding database-specific benchmark scenarios.

Since we're transitioning to a database backend, it would be valuable to add specific benchmark scenarios that measure:
- Database connection pool behavior under concurrent load
- State access patterns with different transaction isolation levels
- Impact of database caching on performance
Would you like me to help create additional benchmark scenarios focusing on these database-specific aspects?
crates/saya/core/src/blockchain/mod.rs (1)
Line range hint 17-39: Consider adding performance metrics, sensei!

While the transition to DbProvider is architecturally sound, it would be beneficial to monitor the performance characteristics of the new implementation. Consider adding metrics collection for key operations defined in the Database trait.
Suggested metrics to monitor:
- Block retrieval latency
- State update duration
- Storage I/O patterns
crates/katana/storage/provider/tests/contract.rs (1)
Line range hint 119-123: Fix template function signature to remove InMemoryProvider reference.

Ohayo sensei! There's an inconsistency in the test template. While the actual test implementations use `ForkedProvider` and `DbProvider`, the template function still references `InMemoryProvider`.

Apply this change to align with the PR objectives:
 fn test_historical_storage_read(
-    #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>,
+    #[from(provider_with_states)] provider: BlockchainProvider<impl StateFactoryProvider>,
     #[case] block_num: BlockNumber,
     #[case] expected_contract_info: Vec<(ContractAddress, Option<ClassHash>, Option<Nonce>)>,
 ) {

crates/katana/primitives/src/version.rs (1)
4-4: Ohayo! The TODO comment can be removed, sensei.

The TODO comment about figuring out the exact format of the version string appears to be outdated. The format is well-defined in the code and thoroughly tested: it follows the `x.y.z.w` pattern where each segment is a `u8`, and the implementation handles partial versions by zero-filling missing segments.

-// TODO: figure out the exact format of the version string.
crates/katana/storage/db/src/lib.rs (3)
55-70: Ohayo sensei! Excellent implementation of ephemeral DB initialization!

The implementation elegantly solves the goal of simplifying the codebase by unifying the storage backend. The trade-off between durability and performance is well-documented, making it clear when to use this vs. `init_db`.

Consider adding metrics/logging to track ephemeral DB usage patterns in production, which could help identify potential memory pressure or performance bottlenecks.
147-176: Solid test coverage for ephemeral DB lifecycle, sensei!

The test effectively verifies the reference counting and cleanup behavior. Consider adding these edge cases:
- Concurrent access to the same ephemeral DB
- Error handling when directory permissions prevent deletion
- Behavior when the directory is manually deleted while DB is still in use
Would you like me to help generate the additional test cases?
Ohayo sensei! After reviewing the latest script outputs, here's the verification status:
In-Memory Provider References Partially Removed - Action Required
Test Files Still Reference `InMemoryProvider`:
crates/katana/storage/provider/tests/class.rs
crates/katana/storage/provider/tests/storage.rs
crates/katana/storage/provider/tests/contract.rs
`in_memory` Module Structure Exists Without Provider Implementations:
- Files remain in `crates/katana/storage/provider/src/providers/in_memory/` but no longer implement the `Provider` trait.

Recommended Actions:
- Update the test files to remove or replace `InMemoryProvider` references with the new ephemeral DB implementation.
- Consider removing the `in_memory` module if it's no longer necessary for utility purposes.

🔗 Analysis chain
Line range hint 55-176: Verify complete removal of in-memory provider references.

The implementation successfully replaces the in-memory provider with an ephemeral DB. Let's verify there are no lingering references.
Let me check the content of these files to understand if these are actual remaining references or test files that need updating.
Ohayo sensei! After analyzing the results, I can now provide a clear response about the in-memory provider references.
Some test and fork-related files still reference InMemoryProvider - action needed
The references found are in two categories:
- Test files (`class.rs`, `storage.rs`, `contract.rs`) still use `InMemoryProvider` in their test signatures
- Fork provider implementation still imports and uses some `in_memory` module components (cache and state utilities)

Let me check if these are the only remaining references to the in-memory module structure:
Ohayo sensei! Let me check the actual content of the in_memory module to understand if it's still being used as a provider or just for shared utilities:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to in-memory provider

# Test: Search for InMemoryProvider references
echo "Checking for InMemoryProvider references..."
rg -l "InMemoryProvider"

# Test: Search for in-memory related types/traits
echo "Checking for in-memory related types..."
rg -l "in_memory::"

Length of output: 550
Script:
#!/bin/bash
# Check the content of files with InMemoryProvider references
echo "=== Content of storage/provider/tests/class.rs ==="
rg -C 5 "InMemoryProvider" crates/katana/storage/provider/tests/class.rs
echo -e "\n=== Content of storage/provider/tests/storage.rs ==="
rg -C 5 "InMemoryProvider" crates/katana/storage/provider/tests/storage.rs
echo -e "\n=== Content of storage/provider/tests/contract.rs ==="
rg -C 5 "InMemoryProvider" crates/katana/storage/provider/tests/contract.rs
echo -e "\n=== Checking in_memory:: usages ==="
rg -C 5 "in_memory::" crates/katana/storage/provider/src/providers/fork/

Length of output: 5650
Script:
#!/bin/bash
# Check for the in_memory module structure
echo "=== Checking for in_memory module files ==="
fd "." crates/katana/storage/provider/src/providers/in_memory/

# Check for any other imports of in_memory
echo -e "\n=== Checking for other in_memory imports ==="
rg -l "use.*in_memory" --type rust

Length of output: 732
Script:
#!/bin/bash
# Check the content of in_memory module
echo "=== Content of in_memory/mod.rs ==="
cat crates/katana/storage/provider/src/providers/in_memory/mod.rs

echo -e "\n=== Checking if cache.rs and state.rs are now utility modules ==="
# Look for provider trait implementations in these files
rg "impl.*Provider" crates/katana/storage/provider/src/providers/in_memory/cache.rs
rg "impl.*Provider" crates/katana/storage/provider/src/providers/in_memory/state.rs

Length of output: 497
crates/katana/storage/provider/tests/storage.rs (1)
Line range hint 143-146: Fix test template parameter type.

Ohayo sensei! There's an inconsistency in the test template. The provider parameter still references the `InMemoryProvider` type, but this provider is being removed as part of this PR.

Apply this fix to align with the PR's objective:
 fn test_historical_storage_read(
-    #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>,
+    #[from(provider_with_states)] provider: BlockchainProvider<impl StateFactoryProvider>,
     #[case] block_num: BlockNumber,
     #[case] expected_storage_entry: Vec<(ContractAddress, StorageKey, Option<StorageValue>)>,
 ) {

This change makes the test template generic over any provider that implements `StateFactoryProvider`, which is more flexible and aligns with the removal of the in-memory provider.

crates/saya/provider/src/rpc/mod.rs (1)
108-115: Ohayo! Consider extracting the conversion logic, sensei!

While the current implementation is correct and type-safe, we could improve maintainability by moving this conversion logic to a dedicated conversion module, similar to how we handle other type conversions in the codebase.

If you're using the `derive_more` crate, you could also consider adding `#[derive(From)]` to simplify this conversion.

Example implementation in a conversion module:
// in katana_primitives/src/conversion/da.rs
impl From<starknet::core::types::L1DataAvailabilityMode> for L1DataAvailabilityMode {
    fn from(mode: starknet::core::types::L1DataAvailabilityMode) -> Self {
        match mode {
            starknet::core::types::L1DataAvailabilityMode::Blob => Self::Blob,
            starknet::core::types::L1DataAvailabilityMode::Calldata => Self::Calldata,
        }
    }
}

// Then in your current file:
l1_da_mode: block.l1_da_mode.into(),

crates/katana/primitives/src/block.rs (1)
Line range hint 35-41: Ohayo sensei! The FinalityStatus enum looks good, but could use some docs.

The enum is well-structured and derives appropriate traits. Consider adding documentation to explain when each variant is used.
Add rustdoc comments like this:
 /// Finality status of a canonical block.
+///
+/// - `AcceptedOnL2`: Block has been accepted by the L2 network
+/// - `AcceptedOnL1`: Block has been accepted by both L2 and L1 networks
 #[derive(Debug, Copy, Clone, PartialEq, Eq)]
 #[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]
 #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
 pub enum FinalityStatus {

crates/katana/rpc/rpc-types/src/receipt.rs (1)
212-217: Clean implementation, but could use documentation, sensei!

The price unit conversion logic is well-implemented with exhaustive pattern matching. Consider adding a brief doc comment to explain the conversion purpose.
+/// Converts internal fee information to RPC-compatible fee payment format,
+/// handling both Wei and Fri price units.
 fn to_rpc_fee(fee: TxFeeInfo) -> FeePayment {
     let unit = match fee.unit {
         PriceUnit::Wei => starknet::core::types::PriceUnit::Wei,
         PriceUnit::Fri => starknet::core::types::PriceUnit::Fri,
     };

crates/katana/rpc/rpc-types/src/block.rs (2)
Line range hint 246-246: Potential bug: Hardcoded l1_da_mode in BlockWithReceipts

Unlike other implementations that properly convert the header's `l1_da_mode`, this implementation hardcodes it to `Calldata`. This seems inconsistent and might be a bug.

- l1_da_mode: L1DataAvailabilityMode::Calldata,
+ l1_da_mode: match header.l1_da_mode {
+     katana_primitives::da::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
+     katana_primitives::da::L1DataAvailabilityMode::Calldata => L1DataAvailabilityMode::Calldata,
+ },
Line range hint 1-300: Consider additional structural improvements for better code reuse

The codebase has several patterns that could benefit from helper functions:
- Resource price conversions are repeated across implementations
- Status conversions appear in multiple places
- Common struct field mappings could be extracted
Would you like me to provide example implementations for these helper functions, sensei?
crates/katana/executor/tests/fixtures/mod.rs (1)
Line range hint 73-84: Consider adding documentation about the ephemeral nature, sensei!

While the transition to DbProvider is clean, it would be helpful to document that we're using an ephemeral database that's perfect for testing. This helps other developers understand the test isolation guarantees.
Add a comment like this above line 73:
+ // Using ephemeral database that's automatically cleaned up after tests
 let provider = DbProvider::new_ephemeral();
crates/katana/storage/db/src/tables.rs (2)
Line range hint 367-374: Ohayo sensei! Consider enhancing test coverage for PriceUnit.

While the current test case validates the `Wei` unit, it would be beneficial to add test coverage for the `Fri` unit as well to ensure complete validation of the fee structure serialization.

Here's a suggested addition to the test cases:
 assert_value_compress_decompress! {
     (Receipt, Receipt::Invoke(InvokeTxReceipt {
         revert_error: None,
         events: Vec::new(),
         messages_sent: Vec::new(),
         execution_resources: Default::default(),
         fee: TxFeeInfo { gas_consumed: 0, gas_price: 0, overall_fee: 0, unit: PriceUnit::Wei },
     })),
+    (Receipt, Receipt::Invoke(InvokeTxReceipt {
+        revert_error: None,
+        events: Vec::new(),
+        messages_sent: Vec::new(),
+        execution_resources: Default::default(),
+        fee: TxFeeInfo { gas_consumed: 0, gas_price: 0, overall_fee: 0, unit: PriceUnit::Fri },
+    }))
 }
Line range hint 1-374: Ohayo sensei! The database schema changes look solid.

The integration of `PriceUnit` and the overall table structure effectively supports the transition from in-memory provider to MDBX backend. The schema maintains proper separation of concerns and includes comprehensive test coverage. This change aligns well with the PR objective of simplifying the database backend.

crates/katana/rpc/rpc/src/starknet/mod.rs (1)
116-119: Ohayo! Clean unit conversion, with room for enhancement, sensei!

The PriceUnit conversion is correct and exhaustive. However, consider extracting this conversion into a trait implementation for better maintainability:
// In a separate traits module
trait IntoPriceUnit {
    fn into_price_unit(self) -> PriceUnit;
}

impl IntoPriceUnit for katana_primitives::fee::PriceUnit {
    fn into_price_unit(self) -> PriceUnit {
        match self {
            Self::Wei => PriceUnit::Wei,
            Self::Fri => PriceUnit::Fri,
        }
    }
}

// In the current file
-unit: match fee.unit {
-    katana_primitives::fee::PriceUnit::Wei => PriceUnit::Wei,
-    katana_primitives::fee::PriceUnit::Fri => PriceUnit::Fri,
-},
+unit: fee.unit.into_price_unit(),

crates/katana/primitives/src/chain_spec.rs (1)
Line range hint 233-600: Consider optimizing fee token storage calculations

The current implementation in the test file calculates total supply by iterating over allocations multiple times. Consider pre-calculating the total supply once and reusing it.
Here's a suggested optimization:
 fn add_fee_token(
     states: &mut StateUpdatesWithDeclaredClasses,
     name: &str,
     symbol: &str,
     decimals: u8,
     address: ContractAddress,
     class_hash: ClassHash,
     allocations: &BTreeMap<ContractAddress, GenesisAllocation>,
 ) {
     let mut storage = BTreeMap::new();
-    let mut total_supply = U256::ZERO;
+    // Pre-calculate total supply
+    let total_supply: U256 = allocations
+        .values()
+        .filter_map(|alloc| alloc.balance())
+        .sum();
+    let (total_supply_low, total_supply_high) = split_u256(total_supply);

     // --- set the ERC20 balances for each allocations that have a balance
     for (address, alloc) in allocations {
         if let Some(balance) = alloc.balance() {
-            total_supply += balance;
             let (low, high) = split_u256(balance);
             // ... rest of the balance storage logic
         }
     }

     // --- ERC20 metadata
     let name = cairo_short_string_to_felt(name).unwrap();
     let symbol = cairo_short_string_to_felt(symbol).unwrap();
     let decimals = decimals.into();
-    let (total_supply_low, total_supply_high) = split_u256(total_supply);
     storage.insert(ERC20_NAME_STORAGE_SLOT, name);
     // ... rest of the metadata storage logic

crates/katana/storage/provider/src/providers/db/mod.rs (1)
786-786: Consider adding test coverage for the new ephemeral provider.

Ohayo, sensei! While the test utilities have been updated, I notice there's no explicit test coverage for the new `new_ephemeral()` method. Consider adding a test case to verify the ephemeral database behavior.

Here's a suggested test case:
#[test]
fn test_ephemeral_provider() {
    let provider = DbProvider::new_ephemeral();

    // Verify basic operations
    let block = create_dummy_block();
    let state_updates = create_dummy_state_updates();
    BlockWriter::insert_block_with_states_and_receipts(
        &provider,
        block.clone(),
        state_updates,
        vec![/* ... */],
        vec![TxExecInfo::default()],
    ).expect("failed to insert block");

    // Verify data was stored
    assert!(provider.latest_number().is_ok());
}

Also applies to: 860-860
crates/katana/primitives/src/fee.rs (1)
5-5: Use 'transaction' instead of 'tx' in documentation comments

For clarity and professionalism, consider replacing "tx" with "transaction" in the documentation comments.

Apply this diff to make the change:
-/// The max amount of the resource that can be used in the tx
+/// The max amount of the resource that can be used in the transaction
 pub max_amount: u64,
-/// The max price per unit of this resource for this tx
+/// The max price per unit of this resource for this transaction
 pub max_price_per_unit: u128,

Also applies to: 7-7
crates/katana/primitives/src/trace.rs (1)
Line range hint 83-107: Ohayo sensei! Consider using an `ExecutionStatus` enum for richer error handling

Replacing the `failed: bool` field with an enum provides more expressive and detailed execution outcomes.

You could define an enum like this:
pub enum ExecutionStatus {
    Success,
    Reverted(String), // Include revert error message
}

And update `CallInfo` accordingly:

 pub struct CallInfo {
     // ... other fields ...
-    pub failed: bool,
+    pub execution_status: ExecutionStatus,
 }

This change allows you to capture not only success or failure but also provide additional context on failures.
crates/saya/provider/src/rpc/transaction.rs (1)
50-55: Ohayo, sensei! Consider adding unit tests for the new transformation functions.

The functions `from_rpc_da_mode` and `from_rpc_resource_bounds` are critical for accurate data conversion. Adding unit tests will help ensure they handle all possible input scenarios correctly.

Would you like me to assist in creating unit tests for these functions?
crates/katana/storage/db/src/mdbx/mod.rs (3)
240-241: Unused import: `use crate::init_ephemeral_db`

Ohayo, sensei! The import `use crate::init_ephemeral_db;` appears to be unused in this module. Consider removing it to keep the code clean.
- use crate::init_ephemeral_db;
Line range hint 298-310: Handle errors explicitly in `db_stats` test function

Ohayo, sensei! In the `db_stats` test function, errors are not being handled explicitly, which could make debugging difficult if the test fails. Consider using `expect` or proper error handling to provide clear failure messages.

- let env = create_test_db();
+ let env = create_test_db().expect("Failed to create test database");
Line range hint 407-425: Ensure consistency in error messages

Sensei, the error messages used in the tests are inconsistent, which might cause confusion when debugging.
Consider standardizing the error messages for clarity.
crates/katana/primitives/src/utils/transaction.rs (2)
Line range hint 283-295: Ohayo sensei! Optimizing `encode_gas_bound` Buffer Initialization

In `encode_gas_bound`, you initialize a 32-byte buffer and then split it into segments for `gas_kind`, `max_amount`, and `max_price`. Consider initializing the buffer with the exact size needed for clarity and potential performance benefits.

You might adjust the buffer initialization and usage as follows:
-fn encode_gas_bound(name: &[u8], bound: &ResourceBounds) -> Felt {
-    let mut buffer = [0u8; 32];
-    let (remainder, max_price) = buffer.split_at_mut(128 / 8);
+fn encode_gas_bound(name: &[u8], bound: &ResourceBounds) -> Felt {
+    let mut buffer = [0u8; 16];
+    let (remainder, max_price) = buffer.split_at_mut(8);
Line range hint 397-403: Ohayo sensei! Unit Tests Needed for New Functionality

While you've added substantial changes to transaction hashing functions, there are no corresponding unit tests to verify their correctness with the new parameters. Adding tests will ensure reliability and prevent regressions.
Would you like me to help create unit tests for these updated functions?
crates/katana/rpc/rpc-types/src/transaction.rs (1)
536-541: Address the TODO Regarding Type Conversion

Ohayo, sensei! There's a TODO comment highlighting the need to find a better solution for type conversion due to the orphan rule. Would you like assistance in exploring alternatives, such as defining custom traits or using wrapper types, to avoid relying heavily on `starknet-rs` RPC types?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- Cargo.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (51)
- Cargo.toml (2 hunks)
- crates/katana/cairo/Cargo.toml (1 hunks)
- crates/katana/core/src/backend/storage.rs (4 hunks)
- crates/katana/executor/benches/concurrent.rs (1 hunks)
- crates/katana/executor/benches/execution.rs (1 hunks)
- crates/katana/executor/src/implementation/blockifier/state.rs (3 hunks)
- crates/katana/executor/src/implementation/blockifier/utils.rs (5 hunks)
- crates/katana/executor/tests/fixtures/mod.rs (3 hunks)
- crates/katana/executor/tests/simulate.rs (1 hunks)
- crates/katana/node/Cargo.toml (0 hunks)
- crates/katana/node/src/lib.rs (1 hunks)
- crates/katana/primitives/Cargo.toml (3 hunks)
- crates/katana/primitives/src/block.rs (3 hunks)
- crates/katana/primitives/src/chain.rs (2 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/contract.rs (2 hunks)
- crates/katana/primitives/src/da/mod.rs (1 hunks)
- crates/katana/primitives/src/event.rs (1 hunks)
- crates/katana/primitives/src/fee.rs (1 hunks)
- crates/katana/primitives/src/lib.rs (1 hunks)
- crates/katana/primitives/src/message.rs (1 hunks)
- crates/katana/primitives/src/receipt.rs (7 hunks)
- crates/katana/primitives/src/trace.rs (7 hunks)
- crates/katana/primitives/src/transaction.rs (21 hunks)
- crates/katana/primitives/src/utils/transaction.rs (1 hunks)
- crates/katana/primitives/src/version.rs (1 hunks)
- crates/katana/rpc/rpc-types/src/block.rs (4 hunks)
- crates/katana/rpc/rpc-types/src/receipt.rs (2 hunks)
- crates/katana/rpc/rpc-types/src/transaction.rs (10 hunks)
- crates/katana/rpc/rpc/src/starknet/mod.rs (2 hunks)
- crates/katana/rpc/rpc/src/starknet/trace.rs (2 hunks)
- crates/katana/storage/db/Cargo.toml (3 hunks)
- crates/katana/storage/db/src/lib.rs (3 hunks)
- crates/katana/storage/db/src/mdbx/mod.rs (16 hunks)
- crates/katana/storage/db/src/models/block.rs (1 hunks)
- crates/katana/storage/db/src/models/contract.rs (2 hunks)
- crates/katana/storage/db/src/models/storage.rs (3 hunks)
- crates/katana/storage/db/src/tables.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (4 hunks)
- crates/katana/storage/provider/src/providers/in_memory/mod.rs (0 hunks)
- crates/katana/storage/provider/src/providers/in_memory/state.rs (1 hunks)
- crates/katana/storage/provider/src/test_utils.rs (2 hunks)
- crates/katana/storage/provider/tests/block.rs (1 hunks)
- crates/katana/storage/provider/tests/class.rs (1 hunks)
- crates/katana/storage/provider/tests/contract.rs (1 hunks)
- crates/katana/storage/provider/tests/fixtures.rs (2 hunks)
- crates/katana/storage/provider/tests/storage.rs (1 hunks)
- crates/katana/storage/provider/tests/utils.rs (1 hunks)
- crates/saya/core/src/blockchain/mod.rs (2 hunks)
- crates/saya/provider/src/rpc/mod.rs (2 hunks)
- crates/saya/provider/src/rpc/transaction.rs (5 hunks)
💤 Files with no reviewable changes (2)
- crates/katana/node/Cargo.toml
- crates/katana/storage/provider/src/providers/in_memory/mod.rs
🧰 Additional context used
🔇 Additional comments (115)
crates/katana/primitives/src/message.rs (1)
5-7: Ohayo! Clean and strategic attribute additions, sensei!

The new conditional compilation attributes align perfectly with the PR's objective of transitioning to a database backend:

- The `serde` feature enables seamless database serialization
- The `arbitrary` feature enhances testing capabilities through property-based testing

This is a well-thought-out change that maintains clean separation of concerns through feature flags.
crates/katana/primitives/src/lib.rs (1)
26-26: Ohayo! LGTM, but let's verify the import consistency across the codebase, sensei!

The change to import `Felt` and `FromStrError` from `starknet_types_core::felt` looks good and aligns with the codebase simplification objective.

Let's verify that this change is consistent across the codebase:
crates/katana/storage/db/src/models/block.rs (1)
7-7: Ohayo! Nice addition of test capabilities, sensei! The `#[cfg_attr(test, derive(::arbitrary::Arbitrary))]` attribute is a great addition that will enhance testing capabilities by enabling automatic generation of test data for `StoredBlockBodyIndices`. This aligns well with the transition to a database-backed system.

crates/katana/primitives/src/da/mod.rs (1)
Line range hint 1-26: Ohayo sensei! Clean and well-structured implementation. The implementation aligns well with the PR's objective of simplifying the database backend:
- Clear separation of L1 and general data availability modes
- Appropriate use of feature flags for optional functionality
- Consistent implementation patterns across both enums
crates/katana/storage/db/Cargo.toml (3)
10-10: Ohayo! The arbitrary feature addition looks good, sensei! Adding the `arbitrary` feature to katana-primitives aligns well with the transition to the mdbx backend, enabling better testing capabilities.

33-33: LGTM! The arbitrary dependency addition is well-placed, sensei! Adding `arbitrary` to dev-dependencies is appropriate for generating test cases, especially with the removal of the in-memory provider.

20-20: Verify tempfile usage in core functionality. Moving tempfile from dev-dependencies to main dependencies suggests it's now used in core functionality rather than just tests.
✅ Verification successful
Let me search for actual tempfile usage patterns to get a better understanding.
This will help us understand:
- How tempfile is actually being used in the code
- Which crates have tempfile as a dependency
- Whether it's used in core functionality or just for testing
The previous search might have missed some usages since we were only looking for import statements. This broader search should give us a more complete picture.
Ohayo sensei! After analyzing the codebase, I can now provide a clear response about the tempfile dependency.
Moving tempfile to main dependencies is correct and necessary
The search results show that tempfile is indeed used in core functionality in the `katana/storage/db` crate, specifically in:
- `src/mdbx/mod.rs`: creating temporary directories for database operations
- `src/lib.rs`: multiple uses for database initialization and version-file handling

These are part of the main functionality, not just tests, making it necessary to have tempfile as a main dependency rather than just a dev-dependency.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for tempfile usage in main code vs tests
echo "Main code usage:"
rg "use.*tempfile" --type rust --glob '!*/tests/*' --glob '!*/benches/*'
echo -e "\nTest code usage:"
rg "use.*tempfile" --type rust --glob '*/tests/*' --glob '*/benches/*'
```
Length of output: 891
Script:
```shell
#!/bin/bash
# Search for actual tempfile usage patterns
echo "Searching for tempfile usage patterns in source code:"
rg "tempfile::" --type rust -A 2
echo -e "\nSearching for tempfile in Cargo.toml files:"
fd Cargo.toml --exec grep -l "tempfile"
```
Length of output: 5106
crates/katana/cairo/Cargo.toml (2)
23-26: Ohayo! Clean feature addition with excellent documentation. The new `cairo-vm-test-utils` feature is well-documented and follows Rust's best practices for optional test utilities.

23-26: Verify the necessity of this feature for the database backend transition, sensei. While the feature addition is clean, could you clarify how this test-utility feature supports the PR's main objective of removing the in-memory provider? This will help ensure we're not introducing unnecessary complexity.
Let's check if this feature is used in the modified files:
✅ Verification successful
Let me gather more context about the test utilities feature and its relationship with the database transition.
This will help us understand if the test utilities are specifically being used for database-related testing.
Ohayo, sensei! Based on the search results, I can now provide a clear response about the necessity of this feature.
The `cairo-vm-test-utils` feature is indeed necessary for the database transition, sensei! The feature is required because:
- The `katana-primitives` crate uses the `arbitrary` feature for testing database models
- There's an explicit dependency chain where `katana-primitives` needs the `cairo-vm-test-utils` feature to properly derive `Arbitrary` for types that come from `cairo-vm`
- This is evidenced by the comment in `primitives/Cargo.toml` that references the feature: `"katana-cairo/cairo-vm-test-utils"`

The feature directly supports testing the new database models and primitives, which is crucial for ensuring the reliability of the database backend transition.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for usage of the new feature in modified files
rg --type rust "cairo-vm.*test_utils|arbitrary"
```
Length of output: 5625
Script:
```shell
#!/bin/bash
# Check if the feature is used in test files within the storage/db crate
rg --type rust -g "*/storage/db/**/*test*.rs" "cairo-vm.*test_utils|arbitrary"

# Check Cargo.toml files for dependencies and features
fd Cargo.toml -x cat {} \; | grep -A 5 -B 5 "arbitrary\|test_utils"
```
Length of output: 3431
crates/katana/primitives/Cargo.toml (3)
10-11: LGTM! New dependencies align with project objectives. Ohayo sensei! The addition of these dependencies makes sense given the transition from in-memory to database-backed providers. The optional flag on `arbitrary` is a good practice for test-related dependencies.

Also applies to: 13-13, 24-24

27-27: LGTM! Proper feature configuration for alloy-primitives. The explicit feature configuration for `alloy-primitives` with the `arbitrary` feature is well-structured and maintains consistency with the new arbitrary testing capabilities.

40-45: Verify the necessity of cairo-vm-test-utils. The arbitrary feature configuration looks good, but there's a reference to a comment in katana-cairo/Cargo.toml about why cairo-vm-test-utils is needed. Let's verify this dependency.
✅ Verification successful
Let me gather more context about this dependency.
Ohayo sensei! I can now provide a clear verification of this dependency.
The cairo-vm-test-utils dependency is correctly used and necessary
The comment in katana-cairo/Cargo.toml explains why this dependency is needed: some types from cairo-vm implement the `Arbitrary` trait only under the `test_utils` feature. Since this is part of the `arbitrary` feature configuration in primitives/Cargo.toml, it's a required dependency to ensure proper trait implementations are available.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for the referenced comment explaining the cairo-vm-test-utils dependency

# Search for comments about cairo-vm-test-utils in katana-cairo's Cargo.toml
rg -p "cairo-vm-test-utils" "crates/katana/cairo/Cargo.toml"
```
Length of output: 109
Script:
```shell
#!/bin/bash
# Let's check for any comments or context around the cairo-vm-test-utils in both files
echo "=== Checking katana-cairo/Cargo.toml with context ==="
rg -p -B 3 -A 3 "cairo-vm-test-utils" "crates/katana/cairo/Cargo.toml"

echo -e "\n=== Checking primitives/Cargo.toml with context ==="
rg -p -B 3 -A 3 "cairo-vm-test-utils" "crates/katana/primitives/Cargo.toml"

# Let's also check if there are any tests or code using this dependency
echo -e "\n=== Checking usage of cairo-vm-test-utils across the codebase ==="
rg "cairo-vm-test-utils"
```
Length of output: 1141
crates/katana/executor/benches/execution.rs (1)
21-21: Ohayo sensei! The provider change looks good to me. This change aligns perfectly with the PR objective of transitioning away from the in-memory provider to simplify the codebase. Using the database-backed provider from `test_utils::test_provider()` instead of `test_utils::test_in_memory_provider()` helps standardize the provider interactions across the benchmarks and tests.

Keep up the great work in coordinating this transition! The codebase will be much cleaner and easier to maintain once all the in-memory provider usage is replaced.
crates/katana/primitives/src/contract.rs (2)
19-20: Ohayo! Nice addition of conditional derives, sensei! The addition of the `arbitrary` and `serde` derives enhances the ContractAddress type's testing and serialization capabilities, which aligns well with the database backend standardization goals.

Line range hint 76-83: Clean and well-documented struct implementation, sensei! The new `GenericContractInfo` struct is well-structured with:
- Clear documentation for each field
- Appropriate derives for testing and serialization
- Essential contract state fields (nonce and class_hash)

This addition provides a solid foundation for the database backend transition.
Let's verify the usage of this new struct:
✅ Verification successful
Based on the search results, I can generate the final response as we have comprehensive information about the usage of `GenericContractInfo`.

Ohayo! Your `GenericContractInfo` is well-integrated across the codebase, sensei! The struct is properly used throughout the codebase:
- Used in database tables (`tables.rs`) for contract info storage
- Integrated in both DB and in-memory providers
- Properly handled in state management with appropriate default implementations
- Well-tested in the MDBX implementation
- Documented in the database schema (`database.md`)

The implementation is consistent across all usages and follows the intended purpose for the database backend transition.
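As an illustrative reconstruction of the shape the review describes — a two-field struct with a nonce, a class hash, and a usable `Default` — the sketch below uses simplified byte-array field types; the real katana-primitives fields are Felt-based:

```rust
// Hypothetical sketch of the reviewed struct (real field types differ):
// generic contract state info holding the contract's nonce and class hash,
// with a Default impl so empty state entries are cheap to materialize.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct GenericContractInfo {
    /// The nonce of the contract.
    pub nonce: [u8; 32],
    /// The hash of the contract's class.
    pub class_hash: [u8; 32],
}
```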
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for any existing implementations or usage of GenericContractInfo
rg -A 3 "GenericContractInfo"
```
Length of output: 8919
crates/katana/storage/db/src/models/contract.rs (1)
Line range hint 15-35: Ohayo! Clean implementation of ContractClassChange! The struct and its implementations look solid, with efficient buffer handling and proper fixed-size encoding for contract addresses.
Let's verify the class hash compression implementation:
✅ Verification successful
The previous pattern might have been too specific. Let's gather more context about ClassHash implementations and its compression usage to verify the consistency of the implementation.
Let's verify how ClassHash implements the Compress trait and its usage across the codebase to ensure the implementation in ContractClassChange is consistent with the expected behavior.
Let's try to locate the ClassHash type definition and its trait implementations, particularly focusing on the Compress trait, to verify the implementation's consistency.
Let's verify the Compress trait definition and its implementations to ensure the usage in ContractClassChange is consistent with the codebase's compression patterns, ohayo sensei!
Ohayo sensei! After analyzing the implementation of the Compress trait and its usage across the codebase, I can now provide a definitive response about the ContractClassChange implementation.
The implementation is consistent with the codebase's compression patterns!
The implementation follows the same pattern as other structs in the codebase:
- Uses `Vec<u8>` as the `Compressed` type
- Creates a buffer and extends it with encoded/compressed fields
- Follows the same pattern seen in similar structs like `ContractStorageEntry` and `ContractNonceChange`

The use of `class_hash.compress()` is correct as it's consistent with how other fields are handled in similar implementations throughout the codebase, particularly in the storage and contract models.
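The buffer-building pattern just described can be sketched in a self-contained way. The trait shape and type names below follow the review's wording, not the actual katana-db definitions:

```rust
// Illustrative sketch of the compression pattern the review describes:
// `Compressed` is Vec<u8>, and a struct's compress() builds a buffer by
// extending it with each field's fixed-size encoding in order.
trait Compress {
    type Compressed;
    fn compress(self) -> Self::Compressed;
}

struct ClassHash([u8; 32]);

impl Compress for ClassHash {
    type Compressed = Vec<u8>;
    fn compress(self) -> Vec<u8> {
        self.0.to_vec()
    }
}

struct ContractClassChange {
    contract_address: [u8; 32], // fixed-size encoding, as in the review
    class_hash: ClassHash,
}

impl Compress for ContractClassChange {
    type Compressed = Vec<u8>;
    fn compress(self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.extend_from_slice(&self.contract_address);
        buf.extend_from_slice(&self.class_hash.compress());
        buf
    }
}
```

Because each field contributes a fixed-size encoding, decoding can split the buffer at known offsets without length prefixes.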
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Verify consistent ClassHash compression usage across the codebase
ast-grep --pattern 'impl $_ for ClassHash { $$$ fn compress($_) -> $_ { $$$ } }'
```
Length of output: 92
Script:
```shell
#!/bin/bash
# Let's try a different approach to verify ClassHash implementations

# First, find where ClassHash is defined
rg "struct ClassHash" -A 5

# Then check for any compression-related implementations
rg "impl.*ClassHash" -A 10

# Also check for any direct usage of ClassHash compression
rg "class_hash\.compress\(\)"
```
Length of output: 157247
Script:
```shell
#!/bin/bash
# Let's check the implementation of ClassHash's Compress trait
rg "impl.*Compress.*for.*ClassHash" -A 10

# Also check where ClassHash is defined and its other traits
rg "struct ClassHash" -A 5
rg "impl.*for ClassHash" -A 10

# Check for any other uses of compress() with ClassHash
rg "class_hash.*compress\(\)"
```
Length of output: 253
Script:
```shell
#!/bin/bash
# Let's try a different approach to find ClassHash definition and its traits

# First, find files that might contain ClassHash definition
fd -e rs -e cairo | xargs rg "ClassHash" -l

# Then check the content of these files for ClassHash implementation
fd -e rs -e cairo | xargs rg "(struct|impl.*for)\s+ClassHash" -A 10

# Also check for any imports or uses related to ClassHash and Compress
rg "use.*ClassHash"
rg "use.*Compress"
```
Length of output: 165271
Script:
```shell
#!/bin/bash
# Let's check the implementation of Compress trait and its usage in the codebase
rg "trait\s+Compress" -A 5
rg "impl\s+Compress\s+for" -A 10

# Also check the specific implementation in codecs module
fd -e rs | xargs rg "mod codecs" -A 10
```
Length of output: 8205
crates/katana/storage/provider/tests/utils.rs (2)
2-2: Ohayo! Import change aligns with project direction. The switch from `starknet::core::types::PriceUnit` to `katana_primitives::fee::{PriceUnit, TxFeeInfo}` aligns well with the PR's objective of standardizing the backend implementation, sensei! This change promotes better modularity by keeping fee-related types within the project's primitive types.

Line range hint 24-24: Fee structure update looks good! The explicit initialization of `TxFeeInfo` with `PriceUnit::Wei` in the test utilities ensures consistency with the new fee handling mechanism. This change maintains test coverage while adapting to the updated architecture.

Let's verify the consistent usage of the new fee structure:
✅ Verification successful
Let me gather more information about the fee structure usage to ensure a comprehensive verification.
This will help us understand the complete picture of fee handling across the codebase, including how TxFeeInfo is being used and if there are any inconsistencies in the fee structure implementation.
Ohayo sensei! After analyzing the results, I can now provide a conclusive response about the fee structure verification.

Fee structure implementation is consistent across the codebase! ✨ The verification shows that:
- The new `PriceUnit` from `katana_primitives::fee` is consistently used across the codebase
- All fee-related structures (`TxFeeInfo`, `FeePayment`) properly initialize the `PriceUnit` enum
- The test utilities correctly use `PriceUnit::Wei` in alignment with the production code
- There are proper conversions between different fee types in RPC layers
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify consistent usage of PriceUnit across the codebase
# Expect: All fee-related structures should use the new PriceUnit from katana_primitives

# Check for any remaining references to the old PriceUnit
rg "use starknet::core::types::PriceUnit"

# Verify the adoption of the new PriceUnit
rg "use katana_primitives::fee::PriceUnit"
```
Length of output: 168
Script:
```shell
#!/bin/bash
# Search for PriceUnit enum usage and its variants
ast-grep --pattern 'PriceUnit::$_'

# Search for TxFeeInfo struct initialization or usage
rg "TxFeeInfo" -A 2

# Look for any fee-related structs and their initialization
rg "fee::" -A 2
```
Length of output: 17035
crates/katana/storage/provider/src/test_utils.rs (2)
4-4: Ohayo! Clean import addition for mdbx test utilities. The new import aligns perfectly with our mission to transition to the mdbx backend, sensei! 🎯

21-24: Excellent simplification of the test provider interface, sensei! 🌟 The transition from `test_in_memory_provider` to a generic `test_provider` using `DbProvider` aligns perfectly with our goal of simplifying the backend. However, let's verify the impact across our test suite.

✅ Verification successful
Ohayo sensei! Your test provider changes are well-handled with proper cleanup! 🧹
The verification shows that the database cleanup is properly implemented through the `Drop` trait for `DbEnv`. When the test database is dropped, it automatically removes the temporary directory if it's ephemeral (which is the case for test databases). This is implemented in `crates/katana/storage/db/src/mdbx/mod.rs`:

```rust
impl Drop for DbEnv {
    fn drop(&mut self) {
        if let Some(inner) = Arc::get_mut(&mut self.inner) {
            if inner.ephemeral {
                // Cleanup temporary directory
                if let Err(e) = std::fs::remove_dir_all(&inner.dir) {
                    eprintln!("Failed to remove temporary directory: {e}");
                }
            }
        }
    }
}
```

The test provider is being used correctly in:
- Executor benchmarks
- Blockifier state tests
- Database tests
All usages properly leverage the automatic cleanup through Rust's RAII pattern, ensuring no test database artifacts are left behind.
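This RAII cleanup pattern can be shown in a standalone sketch. The wrapper type below is hypothetical (katana's `DbEnv` does the equivalent via its own `Drop` impl): it owns a temp directory and removes it when dropped, so tests leave no artifacts behind.

```rust
use std::fs;
use std::path::PathBuf;

// Hypothetical RAII wrapper: owns a temporary directory and deletes it on Drop.
struct EphemeralDir {
    path: PathBuf,
}

impl EphemeralDir {
    fn new(name: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
}

impl Drop for EphemeralDir {
    fn drop(&mut self) {
        // Best-effort cleanup, mirroring the eprintln! fallback in DbEnv's Drop.
        if let Err(e) = fs::remove_dir_all(&self.path) {
            eprintln!("Failed to remove temporary directory: {e}");
        }
    }
}
```

Because cleanup runs unconditionally when the value goes out of scope, the test body never needs an explicit teardown step, even on early return.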
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the migration from test_in_memory_provider to test_provider
# and ensure proper test database cleanup

# Check for any remaining references to the old provider
echo "Checking for any remaining references to test_in_memory_provider..."
rg "test_in_memory_provider" -l

# Look for test files using the new provider
echo "Checking usage of new test_provider function..."
rg "test_provider\(\)" --type rust -l

# Check for proper database cleanup patterns
echo "Checking for potential database cleanup patterns..."
rg "(?i)(cleanup|drop|delete).*test.*db" --type rust -A 3
```
Length of output: 694
Script:
```shell
#!/bin/bash
# Let's verify the test database cleanup patterns more thoroughly

# Check for test database initialization patterns
echo "Checking test database initialization patterns..."
rg "create_test_db|test_provider" --type rust -A 5

# Check for Drop implementations that might handle cleanup
echo "Checking Drop implementations in database-related files..."
ast-grep --pattern 'impl Drop for $_ { $$$ }'

# Check for explicit cleanup or teardown in tests
echo "Checking test modules for cleanup patterns..."
rg "#\[test\]" --type rust -A 10 -B 2 | rg -A 10 "(?i)(cleanup|tear|drop|delete|after)"
```
Length of output: 15317
crates/katana/executor/tests/simulate.rs (1)
8-8: Ohayo sensei! Import change aligns with architectural improvements. The switch from `starknet::core::types::PriceUnit` to `katana_primitives::fee::PriceUnit` aligns well with the PR's objective of standardizing components and simplifying the backend architecture.

crates/katana/primitives/src/event.rs (1)
Line range hint 7-13: Verify database integration and test coverage. Let's ensure the new derives are properly integrated:
✅ Verification successful
Let me search for more specific information about how OrderedEvent is used in the codebase, particularly in tests and database operations. This will help us verify the implications of the new derives.
Ohayo sensei! Let me search for more specific details about how OrderedEvent is used in tests and database models, especially focusing on the serialization aspects since we're adding serde derives.
Ohayo sensei! Based on our investigation, I can now provide a conclusive response about the OrderedEvent struct changes.

The new derives are safe to add without database or test implications. The struct `OrderedEvent` is conditionally deriving `Arbitrary` and `Serialize`/`Deserialize` behind feature flags, which means:
- The `arbitrary` derive is only active when testing/fuzzing
- The `serde` derives are optional and won't affect existing database operations

Our thorough investigation of the codebase showed that while `OrderedEvent` appears in many files, the new derives are properly feature-gated and won't cause any breaking changes to existing functionality.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check if OrderedEvent is used in database-related code
echo "Checking database usage..."
rg -l "OrderedEvent.*save|store|insert|update"

# Check test coverage
echo "Checking test coverage..."
rg -l "use.*arbitrary.*OrderedEvent|#\[test\].*OrderedEvent"
```
Length of output: 10877
Script:
```shell
#!/bin/bash
# Let's check for test coverage more specifically
echo "Checking test files containing OrderedEvent..."
rg -l "OrderedEvent" $(fd -e rs -e cairo test)

# Check for actual usage of OrderedEvent in database operations
echo "Checking database operations with OrderedEvent..."
rg -B 2 -A 2 "OrderedEvent.*" crates/torii/migrations/20240314182410_event_model.sql

# Check for database-related code using OrderedEvent
echo "Checking database code..."
rg -B 2 -A 2 "OrderedEvent" crates/torii/core/src/sql/
```
Length of output: 1830
Script:
```shell
#!/bin/bash
# Let's check the actual implementation and usage in test files
echo "Checking test implementations..."
rg -C 5 "OrderedEvent" crates/torii/core/src/sql/test.rs

# Check if OrderedEvent is used in database models or schemas
echo "Checking database models..."
rg -C 5 "OrderedEvent" crates/torii/core/src/model.rs

# Check if there are any serialization/deserialization tests
echo "Checking serde usage..."
rg -C 5 "OrderedEvent.*serde|OrderedEvent.*serialize|OrderedEvent.*deserialize"
```
Length of output: 389
crates/katana/executor/benches/concurrent.rs (1)
30-30: Ohayo sensei! Verify performance characteristics with the new provider. The change from `test_in_memory_provider()` to `test_provider()` aligns with the PR's objective of standardizing on the database backend. However, this transition might affect benchmark results.

Let's analyze the performance impact:
crates/saya/core/src/blockchain/mod.rs (2)
5-5: Ohayo! Clean import changes, sensei! The replacement of InMemoryProvider with DbProvider aligns perfectly with the PR's objective of simplifying the database backend.

73-73: Implementation looks solid, sensei! The transition to DbProvider::new_ephemeral() maintains the same temporary storage semantics as the previous in-memory implementation while standardizing the backend.
Let's verify that all tests are updated to handle the new provider:
crates/katana/storage/provider/tests/contract.rs (3)
4-4: LGTM! Import changes align with removal of in-memory provider. Ohayo sensei! The updated imports correctly reflect the transition away from the in-memory provider.

Line range hint 29-67: LGTM! Latest module successfully transitioned to db/fork providers. The test structure maintains good coverage while cleanly removing in-memory provider dependencies.

Line range hint 1-156: Verify complete removal of in-memory provider references. Let's ensure no other references to the in-memory provider remain in the codebase.
crates/katana/primitives/src/version.rs (2)
7-7: Ohayo sensei! Nice addition of arbitrary support! The conditional derivation of `Arbitrary` is a great addition that enables property-based testing for the `ProtocolVersion` struct. This will help catch edge cases and improve test coverage.

7-7: Consider adding tests for the arbitrary feature, sensei. Since we've added the `arbitrary` derive macro, it would be valuable to add tests that verify the arbitrary generation works correctly. This ensures the generated instances are valid protocol versions.

Here's a verification script to check if other arbitrary implementations exist in the codebase that we can use as reference:
Would you like me to provide an example test implementation for the arbitrary feature?

crates/katana/storage/provider/tests/class.rs (1)
Line range hint 1-179: Ohayo sensei! The test structure looks solid. The test organization is well-thought-out with:
- Clear separation between latest and historical class tests
- Reusable test templates with comprehensive test cases
- Strong assertions for class state verification
- Good error handling and documentation

This aligns well with the PR's objective of simplifying the database backend while maintaining robust test coverage.
crates/katana/primitives/src/receipt.rs (3)
9-11: Ohayo sensei! The Event struct attributes look good! The conditional derive attributes for `arbitrary` and `serde` are correctly implemented using `cfg_attr`, enabling seamless serialization for database storage.

22-24: The MessageToL1 struct attributes are properly configured! The conditional derives align with the database backend migration strategy.

35-37: Excellent consistency across all receipt types, sensei! The uniform application of `arbitrary` and `serde` derives across all receipt types (`InvokeTxReceipt`, `DeclareTxReceipt`, `L1HandlerTxReceipt`, `DeployAccountTxReceipt`, and the `Receipt` enum) ensures:
- Consistent serialization behavior for database storage
- Comprehensive testing capabilities via arbitrary derivation
- Clean transition from in-memory to database-backed storage

This systematic approach aligns perfectly with the PR's objective of simplifying the database backend.

Also applies to: 52-54, 69-71, 88-90, 107-109
crates/katana/storage/provider/tests/storage.rs (1)
4-4: LGTM! Import changes align with removal of in-memory provider. Ohayo sensei! The updated imports correctly reflect the removal of the in-memory provider, maintaining only the necessary imports for fork and database providers.

crates/saya/provider/src/rpc/mod.rs (1)
11-11: Ohayo! Clean import addition, sensei! The new import aligns well with the PR's objective of standardizing the database backend implementation.
crates/katana/primitives/src/block.rs (2)
Line range hint 78-95: Review compute_hash() implementation for potential hash collisions. The `compute_hash()` method uses `Felt::ZERO` for several header fields including `state_root`, `transaction_commitment`, `event_commitment`, and `protocol_version`. This could potentially lead to hash collisions if these fields differ between blocks.

Consider including all header fields in the hash computation to ensure uniqueness:

```diff
 pub fn compute_hash(&self) -> Felt {
     compute_hash_on_elements(&vec![
         self.number.into(),            // block number
-        Felt::ZERO,                    // state root
+        self.state_root,               // state root
         self.sequencer_address.into(), // sequencer address
         self.timestamp.into(),         // block timestamp
-        Felt::ZERO,                    // transaction commitment
-        Felt::ZERO,                    // event commitment
-        Felt::ZERO,                    // protocol version
+        self.transactions_commitment,  // transaction commitment
+        self.events_commitment,        // event commitment
+        self.protocol_version.into(),  // protocol version
         Felt::ZERO,                    // extra data
         self.parent_hash,              // parent hash
+        self.l1_da_mode.into(),        // L1 DA mode
     ])
 }
```

Also applies to: 124-140

Line range hint 59-69: Address the TODO comment about renaming fields. The TODO suggests renaming `eth` to `wei` and `strk` to `fri` to better reflect their units. Let's verify if these terms are used consistently across the codebase.

Would you like me to help implement the renaming and update all occurrences?
crates/katana/storage/provider/tests/fixtures.rs (2)
162-164: Ohayo sensei! Clean transition to DB provider. The changes effectively standardize on `DbProvider` as the default while maintaining genericity. The implementation properly handles state updates across different blocks, providing good test coverage.

68-68: Ohayo sensei! Verify test database isolation. The removal of the RW argument from `create_test_db` simplifies the code, but we should ensure test isolation is maintained.

✅ Verification successful
Let me check the implementation of `create_test_db` to understand how test isolation is handled.
Let me check how the ephemeral database is initialized to ensure test isolation.
Ohayo sensei! Let me check the implementation of `DbEnv::open_ephemeral()` to ensure each test gets its own isolated database.

Ohayo sensei! Test isolation is properly maintained! The removal of the RW argument is safe because `create_test_db()` creates an ephemeral database that:
- Uses a unique temporary directory for each test via `tempfile::Builder`
- Gets automatically cleaned up when dropped (as seen in the `Drop` implementation)
- Has consistent configuration across all tests
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check if other test files rely on specific DB environment configurations
# Test: Search for create_test_db usage with explicit configurations
rg "create_test_db\(" --type rust
```
Length of output: 1207
Script:
```shell
#!/bin/bash
# Check the implementation of create_test_db
rg -A 10 "fn create_test_db" --type rust
```
Length of output: 881
Script:
```shell
#!/bin/bash
# Check the implementation of init_ephemeral_db
rg -A 15 "fn init_ephemeral_db" --type rust
```
Length of output: 1219
Script:
```shell
#!/bin/bash
# Check the implementation of open_ephemeral
rg -A 15 "fn open_ephemeral" --type rust
```
Length of output: 1427
Cargo.toml (2)
121-121: Ohayo sensei! Nice addition of property-based testing capabilities! The addition of the `arbitrary` crate with the `derive` feature will enhance our testing capabilities by enabling property-based testing. This is particularly valuable when testing the database backend changes.

248-248: Consistent feature addition for Starknet types. Adding the `arbitrary` feature to `starknet-types-core` aligns well with the previous addition, enabling property-based testing for Starknet types. This will help ensure robust testing of the database backend integration.

crates/katana/rpc/rpc-types/src/receipt.rs (1)
3-3: Ohayo! Clean import addition, sensei! The import of `PriceUnit` is well-organized alongside related fee types, maintaining good code organization.

crates/katana/rpc/rpc-types/src/block.rs (1)
Line range hint 1-300: LGTM! Changes align well with PR objectives. The modifications to standardize L1 data availability mode handling across different block types support the PR's goal of simplifying and standardizing the codebase. The consistent pattern of converting between internal and external types will make maintenance easier.
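The internal↔external conversion pattern praised here can be sketched as a pair of `From` impls. The enum names and variants below are assumptions for illustration, not the actual katana or starknet-rs definitions:

```rust
// Hypothetical sketch of converting between an internal DA-mode enum and
// an RPC-facing one, in both directions, via From impls. Having both
// conversions keeps marshalling at the RPC boundary mechanical.
#[derive(Debug, Clone, Copy, PartialEq)]
enum InternalDaMode {
    Blob,
    Calldata,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcDaMode {
    Blob,
    Calldata,
}

impl From<InternalDaMode> for RpcDaMode {
    fn from(m: InternalDaMode) -> Self {
        match m {
            InternalDaMode::Blob => RpcDaMode::Blob,
            InternalDaMode::Calldata => RpcDaMode::Calldata,
        }
    }
}

impl From<RpcDaMode> for InternalDaMode {
    fn from(m: RpcDaMode) -> Self {
        match m {
            RpcDaMode::Blob => InternalDaMode::Blob,
            RpcDaMode::Calldata => InternalDaMode::Calldata,
        }
    }
}
```

Owning both enums (rather than re-exporting the external one) is exactly the "own the types" approach the PR description argues for: the marshalling cost buys flexibility when upstream types can't change.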
crates/katana/executor/tests/fixtures/mod.rs (2)
27-27: Ohayo sensei! Import changes look good! The replacement of InMemoryProvider with DbProvider aligns perfectly with the PR's objective of simplifying the database backend.

Line range hint 73-84: Verify test coverage with the new DbProvider. Let's ensure all test files are properly utilizing this fixture with the new DbProvider implementation.
crates/katana/storage/provider/tests/block.rs (3)
26-27: Ohayo sensei! Clean import changes look good! The imports have been properly cleaned up after removing the in-memory provider, maintaining only the necessary dependencies for fork and db providers.

Line range hint 1-324: LGTM! Changes align perfectly with PR objectives. The removal of in-memory provider tests while maintaining comprehensive test coverage through fork and db providers contributes to a cleaner and more maintainable codebase. The test implementations remain thorough, covering various scenarios including:
- Different block counts (1, 2, 5, 10)
- Empty blocks
- State updates at different block numbers
Line range hint 1-324: Verify test coverage after in-memory provider removal. The test coverage remains comprehensive with both fork and db providers, but let's verify we haven't lost any unique test cases that were previously covered by the in-memory provider.
✅ Verification successful
The previous output showed that there was an in-memory provider test for state updates, but we need to verify if its functionality is fully covered by the current fork and db provider tests. This script will help us understand the current test coverage in detail.
Ohayo sensei! After analyzing the test coverage, I can now provide a conclusive response.
Test coverage remains robust after in-memory provider removal
The removed in-memory provider test (
test_read_state_update_with_in_memory_provider
) was using the same test implementation (test_read_state_update_impl
) that's currently used by both fork and db providers. All test cases and assertions are preserved through:
- Comprehensive block insertion tests with varying block counts (1, 2, 5, 10)
- Empty block handling
- State update verification at different block numbers
- Extensive assertions (40+ verification points) covering all provider functionality
The test implementation is shared via generic functions, ensuring consistent behavior verification across providers:
- `insert_block_test_impl<Db>`
- `insert_block_empty_test_impl<Db>`
- `test_read_state_update_impl<Db>`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any unique test cases in the removed in-memory provider tests
# Search for test functions and assertions in the git history
git log -p --full-history -- "crates/katana/storage/provider/tests/block.rs" | \
  rg -A 5 "test.*in_memory_provider|assert.*in_memory" || \
  echo "No unique test cases found in removed in-memory provider tests"
# Verify test coverage in current implementation
rg "assert_eq|assert" "crates/katana/storage/provider/tests/block.rs"
Length of output: 1703
Script:
#!/bin/bash
# Let's check the current test coverage more thoroughly
# Check all test functions in the current implementation
echo "=== Current Test Functions ==="
rg "^fn.*test.*\(" "crates/katana/storage/provider/tests/block.rs"
# Check all assertions in the current implementation
echo -e "\n=== Current Assertions ==="
rg "assert" "crates/katana/storage/provider/tests/block.rs"
# Check test templates and test cases
echo -e "\n=== Test Templates and Cases ==="
rg "#\[template\]|#\[case::" "crates/katana/storage/provider/tests/block.rs"
Length of output: 4148
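The shared-test pattern the review describes — one generic test body reused across provider backends — can be sketched as follows. `BlockProvider` and the in-memory `VecProvider` here are hypothetical stand-ins for illustration, not the actual katana traits or the real `insert_block_test_impl<Db>` signature:

```rust
// Minimal sketch of a generic test body shared across provider backends.
trait BlockProvider {
    fn insert_block(&mut self, number: u64);
    fn latest_number(&self) -> Option<u64>;
}

// Trivial backend used only to drive the generic test.
struct VecProvider(Vec<u64>);

impl BlockProvider for VecProvider {
    fn insert_block(&mut self, number: u64) {
        self.0.push(number);
    }
    fn latest_number(&self) -> Option<u64> {
        self.0.last().copied()
    }
}

// Generic test body, analogous in spirit to `insert_block_test_impl<Db>`:
// any backend implementing the trait is exercised by the same assertions.
fn insert_block_test_impl<P: BlockProvider>(provider: &mut P, count: u64) {
    for n in 0..count {
        provider.insert_block(n);
    }
    assert_eq!(provider.latest_number(), Some(count - 1));
}

fn main() {
    // Mirrors the block counts exercised by the real tests: 1, 2, 5, 10.
    for count in [1, 2, 5, 10] {
        insert_block_test_impl(&mut VecProvider(Vec::new()), count);
    }
    println!("ok");
}
```

Because the assertions live in the generic function, removing one backend (the in-memory provider) cannot silently drop test cases for the remaining ones.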
crates/katana/core/src/backend/storage.rs (2)
9-9
: Ohayo sensei! Clean import additions. The new imports for `L1DataAvailabilityMode` and `PriceUnit` are well-organized and align with the PR's objective of standardizing the database backend. Also applies to: 225-225
235-235
: Excellent test adaptation, sensei! The replacement of `InMemoryProvider` with `DbProvider::new_ephemeral()` aligns perfectly with the PR's objective of standardizing the database backend while maintaining proper test isolation. Also applies to: 247-247
crates/katana/storage/db/src/tables.rs (1)
291-291
: Ohayo sensei! LGTM on the `PriceUnit` import. The addition of `PriceUnit` from `katana_primitives::fee` aligns well with the standardization effort and supports the new fee structure in the database schema.
crates/katana/rpc/rpc/src/starknet/mod.rs (1)
35-35
: Ohayo! Clean import addition, sensei! The addition of `PriceUnit` to the imports aligns well with the standardization efforts and is properly grouped with related types.
crates/katana/primitives/src/chain_spec.rs (2)
12-12
: Ohayo, sensei! Import changes look good! Moving `L1DataAvailabilityMode` to an internal module aligns well with the PR's objective of simplifying the backend by reducing external dependencies. Also applies to: 233-233
Line range hint
233-600
: Excellent test coverage, sensei!The test suite is comprehensive and well-structured, covering:
- Contract deployments
- Storage updates
- Balance calculations
- Metadata verification
- Edge cases
This thorough testing approach will help maintain stability during the backend simplification.
crates/katana/storage/provider/src/providers/db/mod.rs (1)
60-66
: LGTM! Clean implementation of the ephemeral database provider. The implementation is concise and follows Rust idioms. The error message in `expect` is clear and helpful.
crates/katana/primitives/src/fee.rs (4)
6-6
: Verify the consistency of data types for `max_amount` and `max_price_per_unit`. Consider verifying whether `max_amount` should be of type `u128` instead of `u64` for consistency, especially if resource amounts might exceed `u64` limits. Also applies to: 8-8
11-17
: Ohayo, sensei! The `ResourceBoundsMapping` struct is well-structured.
19-27
: Ohayo, sensei! The `PriceUnit` enum is correctly defined with proper serialization attributes.
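For orientation, the fee primitives discussed in this section can be sketched as below. Field and variant names follow the review's comments; the serde attributes the real definitions in `crates/katana/primitives/src/fee.rs` carry are omitted to keep the example std-only, and the `max_fee` helper is purely illustrative:

```rust
// Sketch of the fee primitives described in the review (not the real definitions).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ResourceBounds {
    max_amount: u64, // the comment above asks whether this should be u128
    max_price_per_unit: u128,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ResourceBoundsMapping {
    l1_gas: ResourceBounds,
    l2_gas: ResourceBounds,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PriceUnit {
    Wei, // fee denominated in ETH
    Fri, // fee denominated in STRK
}

// Worst-case fee for one resource: max_amount * max_price_per_unit.
// Widening to u128 first shows why the u64-vs-u128 question matters.
fn max_fee(bounds: &ResourceBounds) -> u128 {
    bounds.max_amount as u128 * bounds.max_price_per_unit
}

fn main() {
    let bounds = ResourceBoundsMapping {
        l1_gas: ResourceBounds { max_amount: 1000, max_price_per_unit: 5 },
        l2_gas: ResourceBounds { max_amount: 0, max_price_per_unit: 0 },
    };
    assert_eq!(max_fee(&bounds.l1_gas), 5000);
    assert_eq!(max_fee(&bounds.l2_gas), 0);

    let unit = PriceUnit::Wei;
    assert_ne!(unit, PriceUnit::Fri);
}
```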
Line range hint
32-41
: Ohayo, sensei! The updated `TxFeeInfo` struct with `unit: PriceUnit` looks good.
crates/katana/storage/db/src/models/storage.rs (3)
10-10
: Ohayo, sensei! Great enhancement with `#[cfg_attr(test, derive(::arbitrary::Arbitrary))]` on `StorageEntry`. Deriving `Arbitrary` improves testability by enabling property-based testing for `StorageEntry`, which is beneficial for ensuring robustness.
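The idea behind the `Arbitrary` derive is a value built deterministically from a stream of raw bytes. A hand-rolled std-only sketch of that idea (the real code derives `arbitrary::Arbitrary` instead, and `StorageEntry`'s actual fields differ — the `u64` pair here is a stand-in):

```rust
// Illustrative stand-in for a db model type; not the real StorageEntry.
#[derive(Debug, PartialEq)]
struct StorageEntry {
    key: u64,
    value: u64,
}

// Consume 8 bytes from the stream and build a u64 (zero-padded if exhausted),
// mimicking how Arbitrary implementations draw from an Unstructured buffer.
fn arbitrary_u64(bytes: &mut impl Iterator<Item = u8>) -> u64 {
    let mut buf = [0u8; 8];
    for b in buf.iter_mut() {
        *b = bytes.next().unwrap_or(0);
    }
    u64::from_le_bytes(buf)
}

fn arbitrary_storage_entry(bytes: &mut impl Iterator<Item = u8>) -> StorageEntry {
    StorageEntry { key: arbitrary_u64(bytes), value: arbitrary_u64(bytes) }
}

fn main() {
    // Same input bytes => same generated value (determinism is what makes
    // failures from fuzzing or benchmarks reproducible).
    let a = arbitrary_storage_entry(&mut (0u8..16));
    let b = arbitrary_storage_entry(&mut (0u8..16));
    assert_eq!(a, b);

    // Generate many values, as the PR intends for db-type benchmarks.
    let mut source = (0u8..=255).cycle();
    for _ in 0..100 {
        let _entry = arbitrary_storage_entry(&mut source);
    }
}
```

In the real crate, generated values feed property-based tests and the planned codec benchmarks over all db types.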
38-38
: Ohayo, sensei! Excellent addition of `#[cfg_attr(test, derive(::arbitrary::Arbitrary))]` to `ContractStorageKey`. This change facilitates generating arbitrary instances during testing, enhancing the coverage and reliability of tests involving `ContractStorageKey`.
64-64
: Ohayo, sensei! Nice work deriving `Arbitrary` for `ContractStorageEntry`. Including `#[cfg_attr(test, derive(::arbitrary::Arbitrary))]` allows for more comprehensive property-based tests on contract storage entries, contributing to overall code quality.
crates/katana/storage/provider/src/providers/in_memory/state.rs (1)
90-90
: ⚠️ Potential issue. Ohayo, sensei! Remove the unused commented code for `InMemorySnapshot`. Line 90 contains a commented-out type definition for `InMemorySnapshot`. Since the in-memory provider is being removed as per the PR objectives, and `InMemorySnapshot` is no longer in use, it's recommended to remove this obsolete code to maintain code cleanliness.
Run the following script to verify that `InMemorySnapshot` is not referenced elsewhere in the codebase. Apply this diff to remove the commented-out code:
-// pub(super) type InMemorySnapshot = StateSnapshot<()>;
✅ Verification successful
Since the search results show that `InMemorySnapshot` only appears in the commented line that we're reviewing and nowhere else in the codebase, I can generate the final response now. Ohayo, sensei! Yes, please remove that commented code!
The verification confirms that `InMemorySnapshot` is only present in this single commented line and isn't referenced anywhere else in the codebase. It's safe to remove this obsolete code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Ensure no references to `InMemorySnapshot` remain in the codebase.
# Test: Search for any occurrences of `InMemorySnapshot`. Expect: No matches.
rg 'InMemorySnapshot'
Length of output: 142
crates/katana/primitives/src/chain.rs (2)
8-8
: LGTM! Ohayo, sensei! The addition of the `Arbitrary` derive for `NamedChainId` enhances testability through property-based testing. This looks good.
76-76
: LGTM! Ohayo, sensei! The addition of the `Arbitrary` derive for `ChainId` similarly improves testability. Great job!
crates/katana/primitives/src/trace.rs (6)
12-19
: Ohayo sensei! Converting `ExecutionResources` to a struct improves clarity and extensibility. Changing from a type alias to a struct allows for better encapsulation of execution resource data and facilitates future expansions.
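A struct-shaped `ExecutionResources` of the kind described here can be sketched as follows. Field names follow the review's later comments (`n_steps`, `n_memory_holes`, `builtin_instance_counter`); the real definition lives in `crates/katana/primitives/src/trace.rs` and keys the counter map by cairo-vm's `BuiltinName` rather than the `String` used here:

```rust
use std::collections::HashMap;

// Sketch of the struct-based ExecutionResources (simplified; not the real type).
#[derive(Debug, Default, Clone, PartialEq)]
struct ExecutionResources {
    n_steps: u64,
    n_memory_holes: u64,
    builtin_instance_counter: HashMap<String, u64>,
}

impl ExecutionResources {
    // One advantage of a struct over a type alias: it can grow helper
    // methods like this without touching call sites.
    fn total_builtin_applications(&self) -> u64 {
        self.builtin_instance_counter.values().sum()
    }
}

fn main() {
    let mut resources = ExecutionResources {
        n_steps: 120,
        n_memory_holes: 3,
        ..Default::default()
    };
    resources.builtin_instance_counter.insert("pedersen".into(), 4);
    resources.builtin_instance_counter.insert("range_check".into(), 9);
    assert_eq!(resources.total_builtin_applications(), 13);
}
```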
Line range hint
43-49
: Ohayo sensei! The new `TxResources` struct effectively aggregates transaction resource metrics. This struct provides a comprehensive view of transaction resources, aligning transaction execution information cohesively.
53-56
: Ohayo sensei! Introducing the `L1Gas` struct enhances gas consumption tracking. By separating `l1_gas` and `l1_data_gas`, the code now more precisely accounts for different gas costs associated with L1 interactions.
Line range hint
62-68
: Ohayo sensei! The `CallType` enum adds clarity to call executions. Differentiating between `Call` and `Delegate` types clarifies the nature of contract calls and can prevent inadvertent misuse.
Line range hint
73-78
: Ohayo sensei! The `EntryPointType` enum improves entry point categorization. Categorizing entry points as `External`, `L1Handler`, or `Constructor` enhances execution flow management and readability.
129-129
: Ohayo sensei! Verify the impact of adding `#[serde(transparent)]` to `BuiltinCounters`. Applying `#[serde(transparent)]` alters the serialization behavior of `BuiltinCounters`. Ensure this change is intentional and compatible with existing serialization logic. Run the following script to identify how `BuiltinCounters` is used and serialized across the codebase:
✅ Verification successful
Let me gather more context about the `BuiltinCounters` struct and its serialization. The previous results show that `BuiltinCounters` is a wrapper around `HashMap<BuiltinName, usize>` and is used in trace-related functionality. I want to see its complete implementation and any tests to understand the impact of adding `#[serde(transparent)]`.
Ohayo sensei! The `#[serde(transparent)]` addition is safe and appropriate.
`BuiltinCounters` is a newtype wrapper around `HashMap<BuiltinName, usize>` that filters out zero entries. The `#[serde(transparent)]` attribute is perfectly suited here as it makes the serialization delegate directly to the inner `HashMap`, which is exactly what we want since:
- The struct only contains a single field
- The `From` implementation shows that the wrapper's only purpose is to provide a filtered view of the HashMap
- No custom serialization logic is needed - we want the exact same format as the inner HashMap
The code shows this struct is used in RPC traces (`crates/katana/rpc/rpc/src/starknet/trace.rs`) where consistent serialization is important, and the transparent attribute ensures clean, straightforward serialization behavior.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find usages of `BuiltinCounters` and check for serialization implementations.
# Find all usages of `BuiltinCounters`.
rg 'BuiltinCounters'
# Find any custom serialization or deserialization implementations for `BuiltinCounters`.
rg 'impl.*Serialize.*for BuiltinCounters'
rg 'impl.*Deserialize.*for BuiltinCounters'
Length of output: 753
Script:
#!/bin/bash
# Let's examine the full implementation of BuiltinCounters
ast-grep --pattern 'struct BuiltinCounters { $$$ }'
# Check the actual file content to see the struct definition and its attributes
cat crates/katana/primitives/src/trace.rs
# Look for any tests involving BuiltinCounters serialization
rg -l "test.*BuiltinCounters.*serde"
rg -l "test.*BuiltinCounters.*serialize"
Length of output: 8486
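The newtype-plus-filtering pattern described above can be sketched in std-only form. The `String` keys stand in for cairo-vm's `BuiltinName`, and the `#[serde(transparent)]` attribute on the real struct (which makes serialization delegate to the inner map) is noted in comments rather than applied, to keep the example dependency-free:

```rust
use std::collections::HashMap;

// Sketch of the BuiltinCounters newtype described in the review.
// The real type is #[serde(transparent)], so it serializes exactly like
// the inner HashMap; keys are cairo-vm BuiltinName values, not Strings.
#[derive(Debug, Default, PartialEq)]
struct BuiltinCounters(HashMap<String, usize>);

impl From<HashMap<String, usize>> for BuiltinCounters {
    fn from(counters: HashMap<String, usize>) -> Self {
        // Drop builtins that were never used, keeping the stored map minimal -
        // this is the "filtered view" the review refers to.
        Self(counters.into_iter().filter(|(_, count)| *count != 0).collect())
    }
}

fn main() {
    let mut raw = HashMap::new();
    raw.insert("pedersen".to_string(), 2);
    raw.insert("range_check".to_string(), 5);
    raw.insert("ecdsa".to_string(), 0); // filtered out by From

    let counters = BuiltinCounters::from(raw);
    assert_eq!(counters.0.len(), 2);
    assert_eq!(counters.0.get("pedersen"), Some(&2));
    assert_eq!(counters.0.get("ecdsa"), None);
}
```

Because the wrapper adds no extra field, a transparent serialization keeps the on-disk format as compact as the bare map, which is the storage-size motivation stated in the PR description.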
crates/saya/provider/src/rpc/transaction.rs (2)
3-4
: Ohayo, sensei! Imports for `DataAvailabilityMode` and `ResourceBounds` added correctly. These imports ensure that the necessary types are available for the new data transformations.
176-189
: Ohayo, sensei! Validate fields in `from_rpc_resource_bounds` for completeness. Ensure that `rpc_bounds.l1_gas` and `rpc_bounds.l2_gas` contain valid data before accessing their fields. This will prevent potential runtime errors if any of these fields are `None` or invalid. Run the following script to check for the presence and validity of the `l1_gas` and `l2_gas` fields:
crates/katana/node/src/lib.rs (2)
Line range hint
38-38
: Properly importing `DbEnv` for database initialization. Ohayo, sensei! The import of `DbEnv` from `katana_db::mdbx` is necessary for initializing the database environment, replacing the in-memory provider as per the PR objectives.
189-190
: Correctly initializing an ephemeral database when no database path is provided. Ohayo, sensei! The code properly initializes an ephemeral database using `katana_db::init_ephemeral_db()`, which aligns with the goal of simplifying the database backend by removing the in-memory provider.
crates/katana/rpc/rpc/src/starknet/trace.rs (2)
17-18
: Ohayo, sensei! The new imports are appropriate. The additions of `PriceUnit`, `RevertedInvocation`, `SimulatedTransaction`, `TransactionTrace`, and `TransactionTraceWithHash` to the import list are necessary for the updated implementations below and enhance code clarity.
297-300
: ⚠️ Potential issue. Ensure an exhaustive match on `fee.unit` in `to_rpc_fee_estimate`. Currently, the match statement handles `Wei` and `Fri`, but if `katana_primitives::fee::PriceUnit` gains new variants in the future, this could lead to non-exhaustive pattern errors or unexpected behavior. Consider adding a wildcard arm to handle any additional variants:
  unit: match fee.unit {
      katana_primitives::fee::PriceUnit::Wei => PriceUnit::Wei,
      katana_primitives::fee::PriceUnit::Fri => PriceUnit::Fri,
+     _ => PriceUnit::Unknown,
  }
Alternatively, if all possible variants are covered and no others will be added, adding an explanatory comment can improve maintainability.
Likely invalid or redundant comment.
crates/katana/primitives/src/transaction.rs (18)
7-8
: Ohayo, sensei! Appropriate addition of necessary imports. The `DataAvailabilityMode` and `ResourceBoundsMapping` imports are correctly introduced to support new functionalities.
25-25
: Ohayo, sensei! Enhanced serialization and testing capabilities for `TxType`. Adding `serde` serialization and `arbitrary` derivations to `TxType` enables better interoperability and testing.
46-46
: Ohayo, sensei! Added serialization and arbitrary derivations for `Tx`. Including `serde` and `arbitrary` derives on `Tx` improves serialization and testing support.
227-227
: Ohayo, sensei! Correct usage of `Felt::from` for `sender_address` in `InvokeTx::V1`. Converting `sender_address` to `Felt` ensures consistent hash calculation.
236-236
: Ohayo, sensei! Consistent hash calculation in `InvokeTx::V3`. Using `Felt::from(tx.sender_address)` aligns with `InvokeTxV1` and maintains consistency.
357-357
: Ohayo, sensei! Consistent hash calculation in `DeclareTx::V1`. Using `Felt::from(tx.sender_address)` ensures correctness in hash calculation.
366-366
: Ohayo, sensei! Correct hash calculation in `DeclareTx::V2`. The hash calculation method accurately incorporates `compiled_class_hash` and maintains consistency.
376-376
: Ohayo, sensei! Proper hash calculation in `DeclareTx::V3`. The hash computation correctly includes new fields like `resource_bounds` and `paymaster_data`.
396-396
: Ohayo, sensei! Added serialization and arbitrary derives to `L1HandlerTx`. Enhancing `L1HandlerTx` with `serde` and `arbitrary` derives improves serialization and testing capabilities.
422-422
: Ohayo, sensei! Correct conversion in hash calculation for `L1HandlerTx`. Using `Felt::from(self.contract_address)` ensures consistent and correct hash calculation.
511-511
: Ohayo, sensei! Consistent hash calculation in `DeployAccountTx::V1`. Using `Felt::from(tx.contract_address)` ensures correctness.
522-522
: Ohayo, sensei! Correct hash calculation in `DeployAccountTx::V3`. Hash calculation for `DeployAccountTxV3` incorporates new fields consistently.
432-432
: Ohayo, sensei! Addition of the `V3` variant to `DeployAccountTx`. Ensure that all logic handling `DeployAccountTx` accounts for the new `V3` variant to prevent runtime errors. Use the following script to find potential missing handlers:
#!/bin/bash
# Description: Find match statements on `DeployAccountTx` potentially missing `V3` variant
ast-grep --pattern $'match $_ {
  DeployAccountTx::V1($$) => $_,
  // Missing V3 variant
}'
474-474
: Ohayo, sensei! Introduction of `DeployAccountTxV3`. The `DeployAccountTxV3` struct extends deploy account transactions with additional fields like `resource_bounds` and `tip`. Verify that it's correctly integrated into transaction processing. Check that functions processing `DeployAccountTx` handle `V3` appropriately:
#!/bin/bash
# Description: Find functions matching on `DeployAccountTx` that may need to handle `V3`
ast-grep --pattern $'fn $_($$) {
  match $_ {
    DeployAccountTx::V1($$) => $_,
    // May be missing `V3` variant
  }
}'
320-320
: Ohayo, sensei! Introduction of `DeclareTxV3`. `DeclareTxV3` adds new fields like `resource_bounds` and `tip`. Verify that this new version is integrated into the transaction handling logic. Check that functions processing `DeclareTx` handle `V3` appropriately:
#!/bin/bash
# Description: Find functions matching on `DeclareTx` that may need to handle `V3`
ast-grep --pattern $'fn $_($$) {
  match $_ {
    DeclareTx::V1($$) => $_,
    DeclareTx::V2($$) => $_,
    // May be missing `V3` variant
  }
}'
187-187
: Ohayo, sensei! Introduction of `InvokeTxV3`. The new `InvokeTxV3` struct extends transaction capabilities with additional fields like `resource_bounds` and `tip`. Ensure that this new version is correctly integrated into the transaction processing logic. Check that all functions handling `InvokeTx` accommodate the `V3` variant:
#!/bin/bash
# Description: Find functions matching on `InvokeTx` that may need to handle `V3`
ast-grep --pattern $'fn $_($$) {
  match $_ {
    InvokeTx::V1($$) => $_,
    // May be missing V3 variant
  }
}'
254-254
: Ohayo, sensei! Added `V3` variant to the `DeclareTx` enum. Ensure that all match statements and logic processing `DeclareTx` handle the new `V3` variant to prevent possible runtime errors. Use the following script to locate match statements that might need updates:
✅ Verification successful
Let me try a different approach to find match statements on `DeclareTx`. This will help us locate any match expressions involving `DeclareTx` and see their patterns. Let's also search for any direct usage of the enum variants, and check the enum definition itself to understand its structure.
Ohayo, sensei! All match statements are properly handling the V3 variant
Based on the search results, I can confidently verify that all match statements handling `DeclareTx` already include the `V3` variant. The evidence shows:
- In `rpc-types/src/transaction.rs`: complete match coverage for V1, V2, and V3 variants in type conversions
- In `primitives/src/transaction.rs`: all variants handled for class hash computation and transaction hash computation
- In `pool/src/tx.rs`: complete coverage for nonce, sender_address, max_fee, and tip handling, with special V3-specific logic for max_fee (returns 0) and tip
- In `executor/src/implementation/blockifier/utils.rs`: all three variants (V1, V2, V3) are properly matched
The codebase appears to have been thoroughly updated to handle the V3 variant in all relevant locations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find match statements on `DeclareTx` potentially missing `V3` variant
ast-grep --pattern $'match $_ {
  DeclareTx::V1($$) => $_,
  DeclareTx::V2($$) => $_,
  // Missing V3 variant
}'
Length of output: 343
Script:
#!/bin/bash
# First, let's find files that contain DeclareTx
rg "DeclareTx" -l
# Then look for match statements in those files
rg "match.*DeclareTx" -A 5
Length of output: 479
Script:
#!/bin/bash
# Find usage of DeclareTx variants
rg "DeclareTx::(V1|V2|V3)"
Length of output: 3173
Script:
#!/bin/bash
# Find the DeclareTx enum definition
ast-grep --pattern 'enum DeclareTx {
  $$$
}'
Length of output: 47
158-158
: Ohayo, sensei! Ensure all `InvokeTx` variants are handled appropriately. With the addition of `InvokeTxV3`, please verify that all pattern matches and logic handling `InvokeTx` cover the new variant to prevent potential runtime errors. Consider running the following script to find match statements that might need updates:
✅ Verification successful
Let me try a different approach to search for match statements on `InvokeTx`. This will help us identify where `InvokeTx` is being matched and what variants are being handled. Let's also check the definition of `InvokeTx` to understand all its variants, and search for any potential pattern matching on this type.
Ohayo! All `InvokeTx` variants are properly handled, sensei! After reviewing the codebase, I can confirm that all pattern matches on `InvokeTx` properly handle both V1 and V3 variants. Here's what I found:
In `pool/src/tx.rs`, all matches cover both variants for:
- Nonce handling: `InvokeTx::V1(v1)` and `InvokeTx::V3(v3)`
- Sender address: both variants handled
- Max fee: both V1 and V3 (V3 explicitly returns 0)
- Tip: both V1 (returns 0) and V3 handled
In `primitives/src/transaction.rs`, hash calculation handles both variants:
- `InvokeTx::V1(tx)` using `compute_invoke_v1_tx_hash`
- `InvokeTx::V3(tx)` using `compute_invoke_v3_tx_hash`
In `blockifier/utils.rs`, transaction conversion handles both:
- `InvokeTx::V1(tx)` for V1 transactions
- `InvokeTx::V3(tx)` for V3 transactions
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all match statements on `InvokeTx` not covering the new `V3` variant
ast-grep --pattern $'match $_ {
  InvokeTx::V1($$) => $_,
  // Missing V3 variant
}'
Length of output: 343
Script:
#!/bin/bash
# First, let's find all files that contain "InvokeTx"
rg "InvokeTx" -l
# Then, let's look for match statements in those files
rg "match.*InvokeTx" -A 5
Length of output: 752
Script:
#!/bin/bash
# Find the enum definition
rg "enum InvokeTx" -A 10
Length of output: 870
Script:
#!/bin/bash
# Look for pattern matching on InvokeTx with more context
rg "InvokeTx::" -B 2 -A 5
Length of output: 18032
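The reason these searches matter can be shown in miniature: a non-wildcard match fails to compile when a variant is added, while a `_` arm would silently swallow it. The types below are simplified stand-ins, not the actual katana definitions:

```rust
// Simplified stand-in for the transaction enum; not the real type.
enum InvokeTx {
    V1 { max_fee: u128 },
    V3 { tip: u64 },
}

// Exhaustive match with no wildcard arm: adding an InvokeTx::V4 variant
// later would turn this into a compile error, which is exactly the safety
// net the review relies on when checking every match site.
fn tip(tx: &InvokeTx) -> u64 {
    match tx {
        InvokeTx::V1 { .. } => 0, // V1 has no tip field
        InvokeTx::V3 { tip } => *tip,
    }
}

fn main() {
    assert_eq!(tip(&InvokeTx::V1 { max_fee: 100 }), 0);
    assert_eq!(tip(&InvokeTx::V3 { tip: 7 }), 7);
}
```

This mirrors the V3-specific logic the review found in `pool/src/tx.rs` (V1 tip is 0, V3 carries an explicit tip).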
crates/katana/storage/db/src/mdbx/mod.rs (2)
10-11
: Ohayo, sensei! Good inclusion of necessary imports. The addition of `std::path::{Path, PathBuf}` and `std::sync::Arc` is appropriate and necessary for the new implementation.
Line range hint
524-558
: Ordering issue in iteration over duplicate keys. Ohayo, sensei! In the `db_dup_sort` test, when iterating over duplicate keys, the entries are expected to be in a specific order, but this is not guaranteed. Please verify that the ordering of entries with duplicate keys is consistent and aligns with expectations.
crates/katana/primitives/src/utils/transaction.rs (6)
6-7
: Ohayo sensei! Great shift to local types for improved clarity. I noticed you've updated the imports to use `DataAvailabilityMode` and `ResourceBounds` from `crate::da` and `crate::fee` respectively. This enhances modularity and maintains consistency within our codebase. Nice work!
Line range hint
297-304
: Ohayo sensei! Verify correctness of the `hash_fee_fields` function. The `hash_fee_fields` function now includes encoding of gas bounds. Ensure that the order and encoding align with the expected specification for Poseidon hash inputs. Please confirm the correctness by checking the specification or running test vectors if available.
Line range hint
306-311
: Ohayo sensei! Confirm the `encode_da_mode` bit-shifting logic. In `encode_da_mode`, the `nonce_da_mode` is shifted left by 32 bits before adding `fee_da_mode`. Ensure that this concatenation reflects the intended encoding, and that the data availability modes are correctly represented in the resulting `Felt`. Consider adding tests to validate that different mode combinations produce the expected output.
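The suggested tests can be sketched directly from the described layout: nonce DA mode in the upper 32 bits, fee DA mode in the lower. A `u64` stands in for the `Felt` return type, and the variant discriminants (L1 = 0, L2 = 1) are an assumption made for illustration:

```rust
// Assumed discriminants for illustration only; verify against the real enum.
#[derive(Clone, Copy)]
enum DataAvailabilityMode {
    L1 = 0,
    L2 = 1,
}

// Sketch of the encoding described in the review: (nonce_da_mode << 32) + fee_da_mode.
// The real function returns a Felt; u64 is wide enough to show the layout.
fn encode_da_mode(
    nonce_da_mode: DataAvailabilityMode,
    fee_da_mode: DataAvailabilityMode,
) -> u64 {
    ((nonce_da_mode as u64) << 32) + fee_da_mode as u64
}

fn main() {
    // Each mode combination yields a distinct value, so the encoding is lossless.
    assert_eq!(encode_da_mode(DataAvailabilityMode::L1, DataAvailabilityMode::L1), 0);
    assert_eq!(encode_da_mode(DataAvailabilityMode::L1, DataAvailabilityMode::L2), 1);
    assert_eq!(encode_da_mode(DataAvailabilityMode::L2, DataAvailabilityMode::L1), 1 << 32);
    assert_eq!(encode_da_mode(DataAvailabilityMode::L2, DataAvailabilityMode::L2), (1 << 32) + 1);
}
```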
Line range hint
104-137
: Ohayo sensei! Updated `compute_declare_v3_tx_hash` function signature. The inclusion of `l1_gas_bounds`, `l2_gas_bounds`, `nonce_da_mode`, and `fee_da_mode` in `compute_declare_v3_tx_hash` enhances transaction handling by incorporating resource bounds and data availability modes. Ensure all invocations of this function are updated accordingly. To confirm all uses are updated, you can run:
#!/bin/bash
# Description: Locate calls to `compute_declare_v3_tx_hash` and inspect their parameters.
rg 'compute_declare_v3_tx_hash\('
Line range hint
42-74
: Ohayo sensei! Ensure all function calls reflect the new parameters. You've added new parameters `l1_gas_bounds`, `l2_gas_bounds`, `nonce_da_mode`, and `fee_da_mode` to `compute_deploy_account_v3_tx_hash`. This change improves the function's capability to handle resource bounds and data availability modes explicitly. Please verify that all calls to this function across the codebase have been updated to match the new signature to prevent any compilation issues. Please run the following script to find and review all usages:
#!/bin/bash
# Description: Find all calls to `compute_deploy_account_v3_tx_hash` and check their parameters.
# Search for function calls
rg 'compute_deploy_account_v3_tx_hash\('
Line range hint
169-202
: Ohayo sensei! Enhanced `compute_invoke_v3_tx_hash` with new parameters. By adding `l1_gas_bounds`, `l2_gas_bounds`, `nonce_da_mode`, and `fee_da_mode` to `compute_invoke_v3_tx_hash`, you've improved the function's expressiveness regarding resource bounds and data availability. Please verify that all function calls are updated to match this new signature. Use the following script to find all occurrences:
✅ Verification successful
Let me search for the correct function name since the previous search had incorrect function names.
Let me check the actual implementation of the function call to verify if it's properly updated.
Ohayo sensei! All function calls are properly updated!
The verification shows that the only call to `compute_invoke_v3_tx_hash` in `transaction.rs` correctly includes all the new parameters: `l1_gas_bounds`, `l2_gas_bounds`, `nonce_da_mode`, and `fee_da_mode`. The implementation matches the enhanced function signature perfectly, so no further updates are needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find and review calls to `compute_invoke_v3_tx_hash`.
rg 'compute_invoke_v3_tx_hash\('
Length of output: 886
Script:
#!/bin/bash
# Description: Find and review calls to `compute_invoke_v3_tx_hash`
rg 'compute_invoke_v3_tx_hash'
Length of output: 432
Script:
#!/bin/bash
# Show the implementation where compute_invoke_v3_tx_hash is called
rg -A 10 'InvokeTx::V3\(tx\).*compute_invoke_v3_tx_hash'
Length of output: 1006
crates/katana/executor/src/implementation/blockifier/state.rs (3)
242-242
: Ohayo, sensei! The transition to `DbProvider` reflects the move to a database-backed provider. The import of `katana_provider::providers::db::DbProvider` replaces the `InMemoryProvider`, aligning with the PR objectives to simplify the backend by using a standardized database provider.
455-455
: Ohayo, sensei! Using `DbProvider::new_ephemeral()` ensures isolated testing environments. The creation of an ephemeral `DbProvider` in the `fetch_non_existant_data` test is appropriate. It provides a clean, in-memory database for each test run, which aligns with best practices for unit testing and eliminates potential side effects between tests.
270-270
: Confirm that `test_provider()` returns a correctly configured `DbProvider`. Ensure that `katana_provider::test_utils::test_provider()` is appropriately returning a `DbProvider` instance initialized for testing purposes. This will maintain consistent behavior after transitioning from the `InMemoryProvider`. You can run the following script to verify the implementation:
✅ Verification successful
The previous search didn't find the file because the path might be incorrect. Let's try to locate the test utilities and the provider implementation to verify the test_provider function's behavior.
Ohayo sensei! I can now provide a definitive response based on the verification results.
`test_provider()` is correctly configured and returns a properly initialized `DbProvider`
The implementation in `crates/katana/storage/provider/src/test_utils.rs` shows that:
- It creates a new `DbProvider` with a test database using `DbProvider::new(test_utils::create_test_db())`
- Initializes it with the genesis block and states via `initialize_test_provider()`
- Returns the properly configured provider
The transition from `InMemoryProvider` is handled correctly, and the test provider is well-structured for testing purposes.
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that `test_provider()` returns a `DbProvider` instance.
# Expected output: The function should instantiate a `DbProvider`.
rg -A 5 'fn test_provider\(\)' crates/katana/provider/src/test_utils.rs
Length of output: 154
Script:
#!/bin/bash
# Let's first find the correct path for the test_utils module
fd test_utils.rs
# Then let's search for test_provider function in all Rust files
rg -A 10 'pub fn test_provider'
Length of output: 1046
Script:
#!/bin/bash
# Let's also check for any provider-related test utilities
rg -A 10 'fn test_provider|struct DbProvider'
Length of output: 1854
Script:
#!/bin/bash
# Let's check for any test provider implementations in test modules
ast-grep --pattern 'fn test_provider() -> $_ {
  $$$
}'
Length of output: 57
crates/katana/rpc/rpc-types/src/transaction.rs (2)
12-13
: Addition of necessary imports. Ohayo, sensei! The imports for `DataAvailabilityMode`, `ResourceBounds`, and `ResourceBoundsMapping` are correctly added to support the new functionality.
542-584
: Verify correctness of conversion functions. Ohayo, sensei! The new conversion functions `from_rpc_da_mode`, `to_rpc_da_mode`, `from_rpc_resource_bounds`, and `to_rpc_resource_bounds` are crucial for mapping between RPC types and internal types. Please ensure these functions handle all possible variants and edge cases correctly. Here's a script to check for any missing enum variants in the conversion functions:
This script attempts to list all enum variants and matches in the conversion functions to ensure completeness.
✅ Verification successful
Let me try a different approach to verify the enums and their conversions.
This script will help us understand:
- The definition of these enums in both places
- Any tests that verify these conversions
- Where these conversion functions are being used
- The complete structure of ResourceBounds types
This will give us a better picture of the conversion correctness.
Ohayo, sensei! Based on the search results, I can now verify the conversion functions. Let me analyze what we found:
For `DataAvailabilityMode`:
- The conversion functions handle both L1 and L2 variants correctly
- The functions are used consistently throughout the codebase for converting between RPC and internal types
- The conversions are symmetric (from_rpc_da_mode and to_rpc_da_mode match)
For `ResourceBounds`:
- The conversion functions correctly map all fields (max_amount and max_price_per_unit)
- Both l1_gas and l2_gas components are handled
- The conversions maintain the structure and types of the bounds
Usage patterns:
- The functions are used consistently in transaction handling
- They're used in pairs to maintain data consistency
- No edge cases are missed in the conversions
Conversion Functions Are Correctly Implemented
The conversion functions are complete and handle all variants and fields correctly. The implementations are symmetric and used consistently throughout the codebase.
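The symmetry property being verified here — that the RPC/internal conversions are inverses of each other — can be captured in a small round-trip check. The two enums below are simplified stand-ins for the real RPC and internal types:

```rust
// Simplified stand-ins for the RPC-side and internal DA-mode types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcDaMode { L1, L2 }

#[derive(Debug, Clone, Copy, PartialEq)]
enum DaMode { L1, L2 }

// Conversion pair mirroring from_rpc_da_mode / to_rpc_da_mode in shape.
fn from_rpc_da_mode(mode: RpcDaMode) -> DaMode {
    match mode {
        RpcDaMode::L1 => DaMode::L1,
        RpcDaMode::L2 => DaMode::L2,
    }
}

fn to_rpc_da_mode(mode: DaMode) -> RpcDaMode {
    match mode {
        DaMode::L1 => RpcDaMode::L1,
        DaMode::L2 => RpcDaMode::L2,
    }
}

fn main() {
    // Round-trip property: to_rpc_da_mode(from_rpc_da_mode(m)) == m for every variant.
    for mode in [RpcDaMode::L1, RpcDaMode::L2] {
        assert_eq!(to_rpc_da_mode(from_rpc_da_mode(mode)), mode);
    }
}
```

A test of this shape in the real crate would catch any future variant added to one enum but not mapped in the other direction.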
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all variants of DataAvailabilityMode are handled in the conversion functions.
# Search for variants in the RPC DataAvailabilityMode enum
echo "RPC DataAvailabilityMode variants:"
ast-grep --pattern 'pub enum DataAvailabilityMode { $$ }' --json
# Search for variants in the internal DataAvailabilityMode enum
echo "Internal DataAvailabilityMode variants:"
ast-grep --pattern 'pub enum DataAvailabilityMode { $$ }' --json
# Verify that each variant in the RPC enum is mapped in from_rpc_da_mode
echo "Checking from_rpc_da_mode function:"
ast-grep --pattern 'fn from_rpc_da_mode($_) { match $_ { $$ } }' --json
# Verify that each variant in the internal enum is mapped in to_rpc_da_mode
echo "Checking to_rpc_da_mode function:"
ast-grep --pattern 'fn to_rpc_da_mode($_) { match $_ { $$ } }' --json
Length of output: 637
Script:
#!/bin/bash
# Search for DataAvailabilityMode in starknet core types
echo "=== Searching for DataAvailabilityMode in starknet core types ==="
rg -A 5 "DataAvailabilityMode"
# Search for ResourceBounds and ResourceBoundsMapping
echo -e "\n=== Searching for ResourceBounds and ResourceBoundsMapping definitions ==="
rg -A 10 "ResourceBounds(Mapping)?"
# Look for any tests of these conversion functions
echo -e "\n=== Searching for tests of conversion functions ==="
rg -A 5 "test.*(_da_mode|resource_bounds)"
# Look for usages of these conversion functions
echo -e "\n=== Searching for usages of conversion functions ==="
rg "(from|to)_rpc_(da_mode|resource_bounds)"
Length of output: 235368
crates/katana/executor/src/implementation/blockifier/utils.rs (6)
482-485
: The `to_api_da_mode` function converts correctly. The `to_api_da_mode` function has been introduced to map `katana_primitives::da::DataAvailabilityMode` to `DataAvailabilityMode`.
574-576
: Updated call to `to_execution_resources`. In the `to_exec_info` function, the call to `to_execution_resources` has been updated to pass `exec_info.transaction_receipt.resources.vm_resources`.
659-668
: Implement the `to_execution_resources` function correctly. The `to_execution_resources` function at lines 659-668 converts `ExecutionResources`. Ensure that all fields are mapped correctly, including `n_steps`, `n_memory_holes`, and `builtin_instance_counter`.
Line range hint
490-496
: Ensure accurate conversion in `to_api_resource_bounds`. The `to_api_resource_bounds` function converts `ResourceBoundsMapping`. Verify that the mapping between `katana_primitives::fee::ResourceBoundsMapping` and `ResourceBoundsMapping` is done accurately, especially the max amounts and max price per unit. Consider using the following script to check the correctness of the resource bounds mapping:
#!/bin/bash # Description: Verify the mapping of ResourceBounds # Search for usage of `to_api_resource_bounds` rg 'to_api_resource_bounds' -A 5
630-630: ⚠️ Potential issue — Correct the field name in `to_call_info`.
At line 630, ensure that you are accessing the correct field `call.execution.resources` instead of `call.resources`.

It seems there's a typo in accessing the resources. Apply this diff to fix it:

- execution_resources: to_execution_resources(call.resources),
+ execution_resources: to_execution_resources(call.execution.resources),

Likely invalid or redundant comment.
48-48: Ohayo sensei! Update the import path for `PriceUnit`.
The import statement for `PriceUnit` at line 48 has been updated. Ensure that the new path `katana_primitives::fee` is correct and that all references to `PriceUnit` in the codebase are updated accordingly.

You might want to run the following script to check for any outdated import paths:

✅ Verification successful

Let me gather more information about the usage of `PriceUnit` in the codebase. This will help us verify:
- All usages of PriceUnit in the codebase
- Current import patterns from katana_primitives::fee
- Location of PriceUnit definition in katana_primitives

Ohayo sensei! The import path for `PriceUnit` is correctly updated.
Based on the verification results, I can confirm that:
- The import `use katana_primitives::fee::PriceUnit` is consistently used across the codebase
- There are no remaining instances of old import paths for `PriceUnit`
- The definition of `PriceUnit` exists in `katana_primitives/src/fee.rs`
- All usages of `PriceUnit` are properly importing it from the new path

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that all imports of `PriceUnit` are updated to `katana_primitives::fee`

# Search for old import paths
rg 'use\s+starknet::core::types::PriceUnit' -A 0

Length of output: 2603

Script:

#!/bin/bash
# Search for any usage of PriceUnit in the codebase
rg "PriceUnit" -B 2 -A 2

# Search for imports from katana_primitives::fee
rg "use katana_primitives::fee::" -B 1 -A 1

# Search for the definition of PriceUnit in katana_primitives
fd "fee.rs|mod.rs" --exec grep -l "PriceUnit"

Length of output: 16398
fork_provider_with_spawned_fork_network, provider_with_states, DOJO_WORLD_COMPILED_CLASS,
DOJO_WORLD_SIERRA_CLASS,
Ohayo sensei! Found an inconsistency in the test template.
While the in-memory provider import was correctly removed, the test template test_historical_class_read
still uses InMemoryProvider
in its signature:
fn test_historical_class_read(
#[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>,
#[case] block_num: BlockNumber,
#[case] expected_class: Vec<ClassHashAndClasses>,
) {
This should be updated to use a generic type parameter to match the PR's objective of removing the in-memory provider.
Apply this change:
fn test_historical_class_read<Db>(
- #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>,
+ #[from(provider_with_states)] provider: BlockchainProvider<Db>,
#[case] block_num: BlockNumber,
#[case] expected_class: Vec<ClassHashAndClasses>,
) {
Also applies to: 142-145
l1_da_mode: match block.header.l1_da_mode {
    katana_primitives::da::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
    katana_primitives::da::L1DataAvailabilityMode::Calldata => {
        L1DataAvailabilityMode::Calldata
    }
},
🛠️ Refactor suggestion
Ohayo sensei! Consider extracting the l1_da_mode conversion to a helper function.
The conversion pattern between katana_primitives::da::L1DataAvailabilityMode
and starknet::core::types::L1DataAvailabilityMode
is repeated multiple times in this file. To follow the DRY principle, we could extract this into a helper function.
fn convert_l1_da_mode(mode: katana_primitives::da::L1DataAvailabilityMode) -> L1DataAvailabilityMode {
match mode {
katana_primitives::da::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
katana_primitives::da::L1DataAvailabilityMode::Calldata => L1DataAvailabilityMode::Calldata,
}
}
Then use it like:
l1_da_mode: convert_l1_da_mode(block.header.l1_da_mode),
block.header.l1_da_mode = match forked_block.l1_da_mode {
    starknet::core::types::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
    starknet::core::types::L1DataAvailabilityMode::Calldata => {
        L1DataAvailabilityMode::Calldata
    }
};
🛠️ Refactor suggestion
Clean type conversion implementation, sensei!
The match statement properly handles the conversion between different L1DataAvailabilityMode
types. However, we could make it more concise.
Consider simplifying the match statement:
- block.header.l1_da_mode = match forked_block.l1_da_mode {
- starknet::core::types::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
- starknet::core::types::L1DataAvailabilityMode::Calldata => {
- L1DataAvailabilityMode::Calldata
- }
- };
+ block.header.l1_da_mode = match forked_block.l1_da_mode {
+ starknet::core::types::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
+ starknet::core::types::L1DataAvailabilityMode::Calldata => L1DataAvailabilityMode::Calldata,
+ };
pub fn new_ephemeral() -> Self {
    let db = init_ephemeral_db().expect("Failed to initialize ephemeral database");
    Self(db)
🛠️ Refactor suggestion
Consider adding retry logic for database initialization.
The current implementation immediately panics if database initialization fails. For better reliability, consider implementing retry logic or graceful fallback.
pub fn new_ephemeral() -> Self {
const MAX_RETRIES: u32 = 3;
let mut attempts = 0;
while attempts < MAX_RETRIES {
match init_ephemeral_db() {
Ok(db) => return Self(db),
            Err(_) if attempts < MAX_RETRIES - 1 => {
attempts += 1;
std::thread::sleep(std::time::Duration::from_millis(100));
continue;
}
Err(e) => panic!("Failed to initialize ephemeral database after {} attempts: {}", MAX_RETRIES, e),
}
}
unreachable!()
}
fn from_rpc_da_mode(mode: starknet::core::types::DataAvailabilityMode) -> DataAvailabilityMode {
    match mode {
        starknet::core::types::DataAvailabilityMode::L1 => DataAvailabilityMode::L1,
        starknet::core::types::DataAvailabilityMode::L2 => DataAvailabilityMode::L2,
    }
}
Ohayo, sensei! Handle unexpected variants in from_rpc_da_mode
.
Currently, the from_rpc_da_mode
function does not account for potential new variants of DataAvailabilityMode
. To prevent future issues, consider adding a wildcard match arm to handle unexpected variants gracefully.
Apply this diff to handle unknown variants:
fn from_rpc_da_mode(mode: starknet::core::types::DataAvailabilityMode) -> DataAvailabilityMode {
match mode {
starknet::core::types::DataAvailabilityMode::L1 => DataAvailabilityMode::L1,
starknet::core::types::DataAvailabilityMode::L2 => DataAvailabilityMode::L2,
+ _ => {
+ eprintln!("Unknown DataAvailabilityMode variant: {:?}", mode);
+ DataAvailabilityMode::L2 // Defaulting to L2 or choose an appropriate default.
+ },
}
}
Alternatively, you could return a Result
to handle errors explicitly:
fn from_rpc_da_mode(mode: starknet::core::types::DataAvailabilityMode) -> Result<DataAvailabilityMode, ErrorType> {
match mode {
starknet::core::types::DataAvailabilityMode::L1 => Ok(DataAvailabilityMode::L1),
starknet::core::types::DataAvailabilityMode::L2 => Ok(DataAvailabilityMode::L2),
_ => Err(ErrorType::UnknownDataAvailabilityMode),
}
}
fee_data_availability_mode: to_rpc_da_mode(
    tx.fee_data_availability_mode,
),
nonce_data_availability_mode: to_rpc_da_mode(
    tx.nonce_data_availability_mode,
),
🛠️ Refactor suggestion
Refactor Conversion Logic in Transaction Mapping
Ohayo, sensei! The conversion of data availability modes and resource bounds is repeated when mapping InvokeTx::V3
to the RPC transaction type. Refactoring this code can reduce duplication and potential errors.
Also applies to: 297-297
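A minimal sketch of the deduplication the comments above ask for. The enum names `RpcDaMode` and `DaMode` are simplified stand-ins for `starknet::core::types::DataAvailabilityMode` and `katana_primitives::da::DataAvailabilityMode`; the idea is that a pair of `From` impls gives every transaction-mapping site a single `.into()` call instead of a repeated `match`:

```rust
// Stand-ins for the RPC and internal enums; the real types live in
// `starknet::core::types` and `katana_primitives::da` respectively.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RpcDaMode {
    L1,
    L2,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DaMode {
    L1,
    L2,
}

// One conversion point per direction: mapping sites call `.into()`
// instead of repeating the match expression.
impl From<RpcDaMode> for DaMode {
    fn from(mode: RpcDaMode) -> Self {
        match mode {
            RpcDaMode::L1 => DaMode::L1,
            RpcDaMode::L2 => DaMode::L2,
        }
    }
}

impl From<DaMode> for RpcDaMode {
    fn from(mode: DaMode) -> Self {
        match mode {
            DaMode::L1 => RpcDaMode::L1,
            DaMode::L2 => RpcDaMode::L2,
        }
    }
}

fn main() {
    // Round-trip: converting to the internal type and back is lossless.
    let rpc = RpcDaMode::L2;
    let internal: DaMode = rpc.into();
    let back: RpcDaMode = internal.into();
    assert_eq!(rpc, back);
}
```

In the actual codebase this only works because both types are owned or convertible here (Rust's orphan rule would otherwise block the impl on two foreign types), which is exactly the situation the PR description mentions when arguing for owning the primitive types.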
fee_data_availability_mode: to_rpc_da_mode(tx.fee_data_availability_mode),
nonce_data_availability_mode: to_rpc_da_mode(
    tx.nonce_data_availability_mode,
),
🛠️ Refactor suggestion
Refactor Conversion Logic in DeclareTx::V3
Ohayo, sensei! The repeated conversion assignments in DeclareTx::V3
could be refactored to adhere to the DRY principle, enhancing the code's readability and maintainability.
Also applies to: 342-342
fee_data_availability_mode: to_rpc_da_mode(
    tx.fee_data_availability_mode,
),
nonce_data_availability_mode: to_rpc_da_mode(
    tx.nonce_data_availability_mode,
),
🛠️ Refactor suggestion
Refactor Conversion Logic in DeployAccountTransactionV3
Ohayo, sensei! Similar conversion logic is repeated in DeployAccountTransactionV3
. Refactoring this code can improve consistency and reduce redundancy across the codebase.
Also applies to: 388-388
fee_data_availability_mode: from_rpc_da_mode(tx.fee_data_availability_mode),
nonce_data_availability_mode: from_rpc_da_mode(tx.nonce_data_availability_mode),
🛠️ Refactor suggestion
Refactor Repeated Conversion in From<BroadcastedInvokeTx> for InvokeTx
Ohayo, sensei! The conversion assignments are again repeated here. Consider abstracting this logic to maintain consistency and adhere to best practices.
Also applies to: 461-461
fee_data_availability_mode: from_rpc_da_mode(tx.fee_data_availability_mode),
nonce_data_availability_mode: from_rpc_da_mode(tx.nonce_data_availability_mode),
🛠️ Refactor suggestion
Refactor Repeated Conversion in From<BroadcastedDeployAccountTx> for DeployAccountTx
Ohayo, sensei! The repeated conversion code in this implementation can be refactored to reduce duplication and enhance clarity.
Also applies to: 510-510
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## main #2571 +/- ##
==========================================
- Coverage 69.80% 69.59% -0.21%
==========================================
Files 401 401
Lines 51179 50745 -434
==========================================
- Hits 35723 35318 -405
+ Misses 15456 15427 -29

☔ View full report in Codecov by Sentry.
Walkthrough

Ohayo, sensei! This pull request introduces several changes across various

Changes
Actionable comments posted: 10
🧹 Outside diff range and nitpick comments (42)
crates/katana/primitives/src/lib.rs (1)
26-26: Consider adding a comment explaining the dependency change, sensei!
To help future maintainers understand the rationale, consider adding a brief comment explaining why we're using `starknet_types_core` instead of `starknet::core::types`.

+// Using starknet_types_core for better modularity and simplified dependencies
pub use starknet_types_core::felt::{Felt, FromStrError};
crates/katana/primitives/src/da/mod.rs (2)
18-26: The DataAvailabilityMode enum implementation is clean and consistent!
The enum follows the same pattern as L1DataAvailabilityMode, maintaining consistency in the codebase. The L1/L2 distinction is clear and well-defined.

Consider adding a doc comment explaining the purpose of this enum, similar to the "L1 da mode" comment above L1DataAvailabilityMode:

+/// Represents the data availability layer (L1 or L2) for transactions.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
Line range hint 1-6: Consider documenting the module's submodules, sensei!
While the code structure is clean, adding documentation for each submodule would improve code maintainability.

Consider adding module-level documentation:

+//! Data Availability (DA) primitives and utilities.
+//!
+//! This module provides types and implementations for handling data availability
+//! in both L1 and L2 contexts, including blob handling and EIP-4844 support.
pub mod blob;
pub mod eip4844;

crates/katana/storage/db/Cargo.toml (1)
33-34: Consider organizing dev-dependencies alphabetically, sensei!
While the dependencies are valid, maintaining alphabetical ordering can improve readability.

[dev-dependencies]
-arbitrary.workspace = true
-criterion.workspace = true
-starknet.workspace = true
+arbitrary.workspace = true
+criterion.workspace = true
+starknet.workspace = true

crates/katana/executor/benches/execution.rs (1)
Line range hint 16-29: Ohayo! The benchmark structure looks solid, sensei!
The benchmark setup maintains good practices with proper warm-up time and batch sizing. The integration with pprof for profiling is particularly valuable for analyzing performance characteristics with the new database backend.

Consider adding a comment explaining the performance characteristics of the new provider to help future maintainers understand the benchmark results.

fn executor_transact(c: &mut Criterion) {
    let mut group = c.benchmark_group("Invoke.ERC20.transfer");
    group.warm_up_time(Duration::from_millis(200));
+   // Using database-backed provider instead of in-memory for more realistic benchmarks
+   // Note: Results include disk I/O overhead
    let provider = test_utils::test_provider();

crates/katana/primitives/src/contract.rs (1)
Line range hint 76-83: The GenericContractInfo struct is well-structured, sensei!
The implementation maintains consistency with ContractAddress and provides clear documentation. The fields are appropriately typed using the existing Nonce and ClassHash types.

Consider adding an example to the documentation to show typical usage:

/// Represents a generic contract instance information.
+///
+/// # Example
+/// ```rust
+/// # use katana_primitives::contract::{GenericContractInfo, Nonce, ClassHash};
+/// let contract_info = GenericContractInfo {
+///     nonce: Nonce::default(),
+///     class_hash: ClassHash::default(),
+/// };
+/// ```
#[derive(Debug, Copy, Clone, Default, PartialEq, Eq)]

crates/katana/storage/db/src/models/contract.rs (2)
Line range hint 15-33: Ohayo! Consider pre-allocating buffer for better performance
The compression implementation looks correct, but we can optimize the buffer allocation by pre-calculating the exact size needed.

Here's a more efficient implementation:

impl Compress for ContractClassChange {
    type Compressed = Vec<u8>;
    fn compress(self) -> Self::Compressed {
-       let mut buf = Vec::new();
+       let mut buf = Vec::with_capacity(32 + 32); // ContractAddress (32) + ClassHash (32)
        buf.extend_from_slice(self.contract_address.encode().as_ref());
        buf.extend_from_slice(self.class_hash.compress().as_ref());
        buf
    }
}
Line range hint 42-65: Similar optimization for nonce changes, sensei!
The implementation is correct, but we can apply the same buffer pre-allocation optimization here.

Here's the optimized version:

impl Compress for ContractNonceChange {
    type Compressed = Vec<u8>;
    fn compress(self) -> Self::Compressed {
-       let mut buf = Vec::new();
+       let mut buf = Vec::with_capacity(32 + 32); // ContractAddress (32) + Nonce (32)
        buf.extend_from_slice(&self.contract_address.encode());
        buf.extend_from_slice(&self.nonce.compress());
        buf
    }
}

crates/katana/storage/provider/tests/utils.rs (2)
Line range hint 26-26: Consider enhancing test data variety, sensei!
The test data currently uses zero values for `gas_consumed`, `gas_price`, and `overall_fee`. While this works for basic testing, consider generating varied non-zero values to catch edge cases and ensure more robust testing.

Here's a suggested improvement:

- fee: TxFeeInfo { gas_consumed: 0, gas_price: 0, overall_fee: 0, unit: PriceUnit::Wei },
+ fee: TxFeeInfo {
+     gas_consumed: rand::random::<u64>(),
+     gas_price: rand::random::<u64>(),
+     overall_fee: rand::random::<u64>(),
+     unit: PriceUnit::Wei
+ },
Line range hint 13-13: Ohayo! Let's address the TODO and improve test determinism, sensei!
The TODO comment indicates that we need more variety in transaction and receipt variants. Additionally, consider making the random data generation deterministic by accepting an optional seed parameter for reproducible tests.

Would you like me to help implement the random transaction variants and add seed-based randomization?
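A sketch of what seed-based, reproducible fixtures could look like. To keep the snippet dependency-free it uses a hand-rolled splitmix64 generator instead of `rand::rngs::StdRng`, and `FeeFixture` is a hypothetical stand-in mirroring the `TxFeeInfo` fields discussed above:

```rust
// Tiny deterministic PRNG (splitmix64) so the fixture needs no external crate;
// in the real test suite this role could be played by `StdRng::seed_from_u64`.
struct SplitMix64(u64);

impl SplitMix64 {
    fn next_u64(&mut self) -> u64 {
        self.0 = self.0.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.0;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }
}

// Hypothetical fee fixture mirroring the fields of `TxFeeInfo`.
#[derive(Debug, PartialEq)]
struct FeeFixture {
    gas_consumed: u64,
    gas_price: u64,
    overall_fee: u64,
}

fn random_fee(seed: u64) -> FeeFixture {
    let mut rng = SplitMix64(seed);
    FeeFixture {
        gas_consumed: rng.next_u64(),
        gas_price: rng.next_u64(),
        overall_fee: rng.next_u64(),
    }
}

fn main() {
    // Same seed => same fixture, so a failing test reproduces deterministically.
    assert_eq!(random_fee(42), random_fee(42));
    // Different seeds yield different (non-zero) fixtures.
    assert_ne!(random_fee(42), random_fee(43));
}
```

Threading the seed through the test as a parameter (and printing it on failure) keeps the variety of randomized data without giving up reproducibility.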
crates/katana/storage/provider/src/test_utils.rs (2)
21-24: Clean refactoring, sensei! Consider adding documentation.
The renaming from `test_db_provider` to `test_provider` nicely reflects its new role as the sole provider implementation. The simplified implementation using `create_test_db()` aligns perfectly with the PR's goal of streamlining the database backend.

Consider updating the doc comment to reflect that this is now the standard provider for testing:

-/// Creates a persistent storage provider with initial states loaded for testin.
+/// Creates a database-backed provider with initial states loaded for testing.
+/// This is the standard provider used across all tests.
-/// Creates a persistent storage provider with initial states loaded for testin. +/// Creates a database-backed provider with initial states loaded for testing. +/// This is the standard provider used across all tests.
21-24
: Excellent architectural simplification, sensei!The consolidation to a single database-backed provider implementation:
- Reduces maintenance overhead
- Provides consistent behavior across tests
- Simplifies the mental model for developers
This aligns perfectly with the PR's goal of removing the in-memory provider complexity.
crates/katana/executor/tests/simulate.rs (1)
Line range hint
67-67
: Consider adding STRK-based transaction test case.The TODO comment indicates a missing test case for STRK-based transactions. Since the codebase supports both Wei and STRK price units, having comprehensive test coverage would be beneficial.
Would you like me to help create a test case for STRK-based transactions?
crates/katana/primitives/src/event.rs (1)
Line range hint
15-22
: Ohayo! Small typo in the documentation, sensei!There's a minor grammatical error in the documentation: "There JSON-RPC specification" should be "The JSON-RPC specification".
-/// There JSON-RPC specification does not specify the format of the continuation token, +/// The JSON-RPC specification does not specify the format of the continuation token,crates/katana/executor/benches/concurrent.rs (1)
Line range hint
1-117
: Consider enhancing benchmark configuration for more comprehensive results
Ohayo sensei! The benchmark setup is solid, but here are some suggestions to make it even more insightful:
- Consider adding more concurrency sizes (e.g., 10, 100, 500) to better understand scaling characteristics
- The warm-up time of 200ms might be too short for steady-state performance. Consider increasing it to 1-2 seconds
- Add memory usage measurements alongside the existing wall-time measurements
crates/saya/core/src/blockchain/mod.rs (1)
72-74: Consider documenting the ephemeral nature of DbProvider, sensei!
While the implementation change is sound, it would be helpful to document that we're using an ephemeral database provider. This information is important for users of the `Blockchain::new()` method to understand the persistence characteristics of the storage.

Add a doc comment explaining the storage characteristics:

/// Initializes a new instance of [`Blockchain`].
+///
+/// Uses an ephemeral database provider that doesn't persist data between restarts.
pub fn new() -> Self {
    Self { inner: BlockchainProvider::new(Box::new(DbProvider::new_ephemeral())) }
}
Line range hint
126-126
: Heads up sensei! Found an inconsistent InMemoryProvider reference.The test template is still using
InMemoryProvider
as its type parameter, but this provider is being removed as part of this PR.Apply this fix:
- #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>, + #[from(provider_with_states)] provider: BlockchainProvider<impl StateFactoryProvider>,
Line range hint
31-156
: Excellent test structure maintained, sensei!The test coverage remains comprehensive with both fork and database providers, ensuring all storage interaction paths are properly tested. The removal of in-memory provider tests doesn't create any coverage gaps since the remaining providers sufficiently test both latest and historical state access patterns.
This simplification aligns well with the principle of having fewer, more focused test implementations. The remaining providers (fork and db) better represent real-world usage patterns.
crates/katana/storage/db/src/lib.rs (2)
55-70
: Ohayo! Clean implementation of ephemeral database initialization, sensei!
The implementation elegantly supports the PR objective of removing the in-memory provider while maintaining good performance characteristics. The tradeoff between durability and write performance is well-documented.
Consider adding metrics/logging to track the usage patterns of ephemeral databases in production, as this could help identify potential performance bottlenecks or unexpected persistence requirements.
147-176
: Well-structured test case for ephemeral DB lifecycle, sensei!
The test thoroughly verifies the cleanup behavior and reference counting.

Consider adding these additional test cases to strengthen the test suite:
Consider adding these additional test cases to strengthen the test suite:
- Concurrent access to the ephemeral DB
- Error cases (e.g., directory permissions)
- Maximum size limits
Here's an example of a concurrent access test:
#[test]
fn ephemeral_db_concurrent_access() {
    use std::sync::Arc;
    use std::thread;

    let db = Arc::new(init_ephemeral_db().expect("failed to create ephemeral database"));
    let dir_path = db.path().to_path_buf();

    let handles: Vec<_> = (0..3)
        .map(|_| {
            let db_clone = Arc::clone(&db);
            thread::spawn(move || {
                // Perform some operations on db_clone
                assert!(db_clone.path().exists());
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    drop(db);
    assert!(!dir_path.exists());
}
Line range hint
156-159
: Ohayo sensei! Found an inconsistency in the test template.The
test_historical_class_read
template function still referencesInMemoryProvider
in its signature:fn test_historical_class_read( #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>, #[case] block_num: BlockNumber, #[case] expected_class: Vec<ClassHashAndClasses>, ) {Since we're removing the in-memory provider, this should be updated to use a generic type parameter.
Here's the suggested fix:
fn test_historical_class_read<Db>( - #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>, + #[from(provider_with_states)] provider: BlockchainProvider<Db>, #[case] block_num: BlockNumber, #[case] expected_class: Vec<ClassHashAndClasses>, ) {crates/katana/primitives/src/receipt.rs (2)
9-10
: Ohayo sensei! The attribute additions look solid and align well with the database backend simplification.The consistent addition of
arbitrary
andserde
derive macros across all receipt-related types will enable:
- Proper serialization/deserialization for database storage
- Enhanced testing through arbitrary instance generation
- Uniform handling of all receipt types in the new mdbx backend
This change supports the PR's goal of removing the in-memory provider by ensuring all types can be properly persisted.
Consider documenting these new capabilities in the module-level documentation to help other developers understand:
- When to use arbitrary generation for testing
- Serialization format expectations for database storage
Also applies to: 22-23, 35-36, 52-53, 69-70, 88-89, 107-108
69-70
: Consider adding serialization format tests for L1HandlerTxReceipt.The L1HandlerTxReceipt contains a B256 field which might need special attention for serialization.
Would you like me to provide an example test case that verifies the serialization format?
crates/katana/storage/provider/tests/storage.rs (2)
Line range hint 138-138: Ohayo! Fix the test template signature, sensei!
The test template still references `InMemoryProvider` even though it's been removed from the codebase.

Update the template signature to use a generic type parameter:

- #[from(provider_with_states)] provider: BlockchainProvider<InMemoryProvider>,
+ #[from(provider_with_states)] provider: BlockchainProvider<impl StateFactoryProvider>,
Line range hint
1-165
: Excellent architectural direction, sensei!
The removal of the in-memory provider simplifies the testing architecture by standardizing on database-backed providers. This approach:
- Reduces maintenance overhead
- Makes tests more representative of real-world scenarios
- Maintains comprehensive test coverage through remaining providers
The transition has been handled cleanly, with only minor cleanup needed in the test template signature.
crates/katana/rpc/rpc-types/src/receipt.rs (1)
212-217: Clean implementation, sensei! Consider adding documentation.
The price unit conversion is well-implemented with explicit mapping. Consider adding a brief doc comment to explain the conversion between internal and RPC types.

+/// Converts internal fee information to RPC fee payment format.
+/// Maps between internal PriceUnit variants and their RPC counterparts.
fn to_rpc_fee(fee: TxFeeInfo) -> FeePayment {
Line range hint
1-300
: Test coverage remains robust after in-memory provider removal.The test suite maintains comprehensive coverage of block-related functionality through:
- Parameterized test cases for different block counts
- Thorough verification of block insertion and retrieval
- Extensive state update testing
- Both empty and populated block scenarios
- Coverage across both DbProvider and ForkedProvider
The removal of in-memory provider tests aligns well with the PR objective of simplifying the database backend.
Consider adding a comment at the top of the file explaining why we exclusively test with DbProvider and ForkedProvider, to help future contributors understand the architectural decision to move away from in-memory testing.
crates/katana/core/src/backend/storage.rs (1)
186-191
: Ohayo! Minor formatting suggestion, sensei!
The l1_da_mode conversion logic is correct and type-safe, but the indentation appears inconsistent with the surrounding code. Consider adjusting the formatting to match the file's style.
block.header.l1_da_mode = match forked_block.l1_da_mode { starknet::core::types::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob, - starknet::core::types::L1DataAvailabilityMode::Calldata => { - L1DataAvailabilityMode::Calldata - } + starknet::core::types::L1DataAvailabilityMode::Calldata => L1DataAvailabilityMode::Calldata, };crates/katana/storage/db/src/models/storage.rs (3)
Line range hint 22-27: Prevent potential panics by checking input length before slicing in `decompress`.
Accessing slices without verifying the length of `bytes` may cause panics if the input is shorter than expected. Please add a check to ensure `bytes` has sufficient length before slicing.

Apply this diff to add length checks:

impl Decompress for StorageEntry {
    fn decompress<B: AsRef<[u8]>>(bytes: B) -> Result<Self, crate::error::CodecError> {
        let bytes = bytes.as_ref();
+       if bytes.len() < 32 {
+           return Err(CodecError::InvalidInput);
+       }
        let key = StorageKey::from_bytes_be_slice(&bytes[0..32]);
        let value = StorageValue::decompress(&bytes[32..])?;
        Ok(Self { key, value })
    }
}
Line range hint 51-58: Prevent potential panics by checking input length before slicing in `decode`.
In `ContractStorageKey::decode`, accessing slices without verifying the length of `bytes` may cause panics if the input is shorter than expected. Please add a check to ensure `bytes` has sufficient length before slicing.

Apply this diff:

impl Decode for ContractStorageKey {
    fn decode<B: AsRef<[u8]>>(bytes: B) -> Result<Self, CodecError> {
        let bytes = bytes.as_ref();
+       if bytes.len() < 64 {
+           return Err(CodecError::InvalidInput);
+       }
        let contract_address = ContractAddress::decode(&bytes[0..32])?;
        let key = StorageKey::decode(&bytes[32..])?;
        Ok(Self { contract_address, key })
    }
}
Line range hint 80-87: Prevent potential panics by checking input length before slicing in `decompress`.
In `ContractStorageEntry::decompress`, accessing slices without verifying the length of `bytes` may cause panics if the input is shorter than expected. Please add a check to ensure `bytes` has sufficient length before slicing.

Apply this diff:

impl Decompress for ContractStorageEntry {
    fn decompress<B: AsRef<[u8]>>(bytes: B) -> Result<Self, CodecError> {
        let bytes = bytes.as_ref();
+       if bytes.len() < 64 {
+           return Err(CodecError::InvalidInput);
+       }
        let key = ContractStorageKey::decode(&bytes[0..64])?;
        let value = StorageValue::decompress(&bytes[64..])?;
        Ok(Self { key, value })
    }
}

crates/katana/primitives/src/block.rs (2)
Line range hint 33-39: Ohayo sensei! Consider adding documentation to `FinalityStatus` variants.
To improve code readability and maintainability, please add documentation comments for `AcceptedOnL2` and `AcceptedOnL1`. This will help others understand the meaning and usage of each status.

pub enum FinalityStatus {
+   /// Indicates the block was accepted on Layer 2.
    AcceptedOnL2,
+   /// Indicates the block was accepted on Layer 1.
    AcceptedOnL1,
}
Line range hint 58-64: Ohayo sensei! Reminder to address the TODO comment regarding field names in `GasPrices`.
There's a TODO suggesting to change the field names to 'wei' and 'fri'. Updating `eth` and `strk` to `wei` and `fri` will enhance clarity and align with the intended naming conventions.

Would you like assistance in updating the field names and ensuring consistency across the codebase?

pub struct GasPrices {
    /// The price of one unit of the given resource, denominated in wei
-   pub eth: u128,
+   pub wei: u128,
    /// The price of one unit of the given resource, denominated in fri (the smallest unit of STRK,
    /// equivalent to 10^-18 STRK)
-   pub strk: u128,
+   pub fri: u128,
}
pub struct GasPrices { /// The price of one unit of the given resource, denominated in wei - pub eth: u128, + pub wei: u128, /// The price of one unit of the given resource, denominated in fri (the smallest unit of STRK, /// equivalent to 10^-18 STRK) - pub strk: u128, + pub fri: u128, }crates/katana/primitives/src/trace.rs (1)
Line range hint `22-30`: Consider renaming `r#type` to avoid raw identifiers.

Ohayo, sensei! Using `r#type` as a field name requires a raw identifier because `type` is a reserved keyword in Rust. Renaming this field to `tx_type` or `transaction_type` would enhance readability and eliminate the need for raw identifiers.

crates/saya/provider/src/rpc/transaction.rs (1)
`169-174`: Consider handling future variants in `from_rpc_da_mode`.

Currently, the `match` statement covers all existing variants of `DataAvailabilityMode`. To future-proof the code and avoid potential panics if new variants are introduced, consider adding a wildcard arm or applying the `#[non_exhaustive]` attribute. Apply this diff to handle unexpected variants:
```diff
 fn from_rpc_da_mode(mode: starknet::core::types::DataAvailabilityMode) -> DataAvailabilityMode {
     match mode {
         starknet::core::types::DataAvailabilityMode::L1 => DataAvailabilityMode::L1,
         starknet::core::types::DataAvailabilityMode::L2 => DataAvailabilityMode::L2,
+        _ => {
+            // Handle unexpected variant, possibly returning an error or defaulting
+            DataAvailabilityMode::L1 // or consider returning a Result
+        }
     }
 }
```

crates/katana/primitives/src/transaction.rs (3)
Line range hint `187-224`: Consider implementing default values in `InvokeTxV3`, sensei.

In `InvokeTxV3`, fields like `tip`, `paymaster_data`, and `account_deployment_data` might benefit from default values to improve usability and reduce boilerplate code when these fields remain at their default state.
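As a sketch of what the suggestion could look like (field names follow the comment above, but the types are simplified stand-ins for the real `Felt`-based fields), deriving `Default` lets callers use struct-update syntax and set only the fields that matter:

```rust
// Hypothetical simplified mirror of InvokeTxV3; the real struct uses
// Felt-based types, this sketch only illustrates the Default pattern.
#[derive(Debug, Default, PartialEq)]
struct InvokeTxV3Sketch {
    tip: u64,
    paymaster_data: Vec<u64>,
    account_deployment_data: Vec<u64>,
    nonce: u64,
}

fn main() {
    // Struct-update syntax: set only the interesting fields,
    // fill the rest from Default::default().
    let tx = InvokeTxV3Sketch { nonce: 7, ..Default::default() };
    assert_eq!(tx.tip, 0);
    assert!(tx.paymaster_data.is_empty());
    assert!(tx.account_deployment_data.is_empty());
    assert_eq!(tx.nonce, 7);
    println!("defaults ok");
}
```

This only works if every field type implements `Default`, so whether it applies to the real struct depends on its `Felt` and resource-bounds types.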
Line range hint `320-355`: Add documentation for new fields in `DeclareTxV3`, sensei.

The fields `resource_bounds`, `tip`, `paymaster_data`, `nonce_data_availability_mode`, and `fee_data_availability_mode` in `DeclareTxV3` are critical for transaction execution. Providing detailed doc comments would enhance code readability and assist other developers in understanding their purposes.
Line range hint `474-510`: Consider default implementations in `DeployAccountTxV3`, sensei.

Similar to earlier structs, if fields like `tip` and `paymaster_data` in `DeployAccountTxV3` can have default values, implementing the `Default` trait or providing default field values could improve usability.

crates/katana/storage/db/src/mdbx/mod.rs (2)
Line range hint `298-307`: Enhance test coverage by checking stats after insertion.

In the `db_stats` test, you've inserted data but only check for non-zero stats. Consider verifying specific values to strengthen the test.
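To illustrate the difference between the current non-zero check and an exact assertion, here is a minimal stand-in (the `TableStat` type is hypothetical, not the real mdbx stats API):

```rust
// Hypothetical per-table stats, standing in for whatever the real
// db stats call returns.
#[derive(Debug, PartialEq)]
struct TableStat {
    entries: u64,
    depth: u32,
}

fn collect_stats() -> TableStat {
    // Pretend the test inserted exactly one header into an empty table.
    TableStat { entries: 1, depth: 1 }
}

fn main() {
    let stat = collect_stats();
    // Weak check: passes for any non-empty table, masking regressions.
    assert!(stat.entries > 0);
    // Stronger check: pins the exact state the test set up.
    assert_eq!(stat, TableStat { entries: 1, depth: 1 });
    println!("stats pinned");
}
```

Pinning exact values makes the test fail as soon as an unrelated change alters how many entries the setup writes.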
Line range hint `330-344`: Ohayo, sensei! Handle potential errors explicitly in tests.

In `db_manual_put_get`, consider handling the case where `result` is `None` to avoid panics during test execution. Apply this diff to handle the `None` case:

```diff
 let result = tx.get::<Headers>(key).expect(ERROR_GET);
-let total_entries = tx.entries::<Headers>().expect(ERROR_GET);
-tx.commit().expect(ERROR_COMMIT);
-assert!(total_entries == 1);
-assert!(result.expect(ERROR_RETURN_VALUE) == value);
+if let Some(retrieved_value) = result {
+    assert_eq!(retrieved_value, value);
+} else {
+    panic!("Expected value not found in database.");
+}
+tx.commit().expect(ERROR_COMMIT);
```

crates/katana/primitives/src/utils/transaction.rs (1)
Line range hint `133-133`: Ohayo sensei! Noticed the TODO about Ethereum types standardization.

The comment suggests preferring `alloy` types over `starknet-rs` types for convenience. Standardizing the Ethereum types can enhance code consistency and maintainability. Would you like assistance in refactoring the code to use `alloy` types consistently?

I can help update the codebase to adopt `alloy` types. Shall I proceed to draft a proposal or open a GitHub issue to track this task?

crates/katana/rpc/rpc-types/src/transaction.rs (1)
`537-541`: Ohayo, sensei! Consider addressing the TODO comment regarding type conversions.

The TODO comment highlights a desire to avoid manual conversions due to the orphan rule limitations in Rust. Exploring alternative approaches, such as using macros to generate these conversion functions or redefining the types to circumvent the orphan rule, could enhance code maintainability and flexibility.

Would you like assistance in proposing a solution or opening an issue to track this task?
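One macro-based direction the TODO could take, sketched with hypothetical stand-in modules rather than the real `starknet-rs`/katana type pairs: a small `macro_rules!` that stamps out both `From` impls for each mirrored type, so each new pair costs one line instead of two hand-written conversions.

```rust
// Hypothetical local stand-ins for a mirrored type pair; in the real
// codebase one side would come from starknet-rs and the other from katana.
mod rpc {
    #[derive(Debug, PartialEq)]
    pub struct DaMode(pub u8);
}
mod prim {
    #[derive(Debug, PartialEq)]
    pub struct DaMode(pub u8);
}

// Generate the pair of From impls in both directions for two newtype wrappers.
macro_rules! mirror_conversion {
    ($a:ty, $b:ty) => {
        impl From<$a> for $b {
            fn from(v: $a) -> Self {
                Self(v.0)
            }
        }
        impl From<$b> for $a {
            fn from(v: $b) -> Self {
                Self(v.0)
            }
        }
    };
}
mirror_conversion!(rpc::DaMode, prim::DaMode);

fn main() {
    let p: prim::DaMode = rpc::DaMode(2).into();
    assert_eq!(p, prim::DaMode(2));
    let r: rpc::DaMode = p.into();
    assert_eq!(r.0, 2);
    println!("round-trip ok");
}
```

This only dodges the boilerplate, not the orphan rule itself: at least one side of each `From` impl must still be a local type, which is exactly why owning the types (as the PR description suggests) is the more flexible long-term option.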
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (51)
- Cargo.toml (2 hunks)
- crates/katana/cairo/Cargo.toml (1 hunks)
- crates/katana/core/src/backend/storage.rs (4 hunks)
- crates/katana/executor/benches/concurrent.rs (1 hunks)
- crates/katana/executor/benches/execution.rs (1 hunks)
- crates/katana/executor/src/implementation/blockifier/state.rs (3 hunks)
- crates/katana/executor/src/implementation/blockifier/utils.rs (5 hunks)
- crates/katana/executor/tests/fixtures/mod.rs (3 hunks)
- crates/katana/executor/tests/simulate.rs (1 hunks)
- crates/katana/node/Cargo.toml (0 hunks)
- crates/katana/node/src/lib.rs (1 hunks)
- crates/katana/primitives/Cargo.toml (3 hunks)
- crates/katana/primitives/src/block.rs (3 hunks)
- crates/katana/primitives/src/chain.rs (2 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/contract.rs (2 hunks)
- crates/katana/primitives/src/da/mod.rs (1 hunks)
- crates/katana/primitives/src/event.rs (1 hunks)
- crates/katana/primitives/src/fee.rs (1 hunks)
- crates/katana/primitives/src/lib.rs (1 hunks)
- crates/katana/primitives/src/message.rs (1 hunks)
- crates/katana/primitives/src/receipt.rs (7 hunks)
- crates/katana/primitives/src/trace.rs (7 hunks)
- crates/katana/primitives/src/transaction.rs (21 hunks)
- crates/katana/primitives/src/utils/transaction.rs (1 hunks)
- crates/katana/primitives/src/version.rs (1 hunks)
- crates/katana/rpc/rpc-types/src/block.rs (4 hunks)
- crates/katana/rpc/rpc-types/src/receipt.rs (2 hunks)
- crates/katana/rpc/rpc-types/src/transaction.rs (10 hunks)
- crates/katana/rpc/rpc/src/starknet/mod.rs (2 hunks)
- crates/katana/rpc/rpc/src/starknet/trace.rs (2 hunks)
- crates/katana/storage/db/Cargo.toml (3 hunks)
- crates/katana/storage/db/src/lib.rs (3 hunks)
- crates/katana/storage/db/src/mdbx/mod.rs (16 hunks)
- crates/katana/storage/db/src/models/block.rs (1 hunks)
- crates/katana/storage/db/src/models/contract.rs (2 hunks)
- crates/katana/storage/db/src/models/storage.rs (3 hunks)
- crates/katana/storage/db/src/tables.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (4 hunks)
- crates/katana/storage/provider/src/providers/in_memory/mod.rs (0 hunks)
- crates/katana/storage/provider/src/providers/in_memory/state.rs (1 hunks)
- crates/katana/storage/provider/src/test_utils.rs (2 hunks)
- crates/katana/storage/provider/tests/block.rs (1 hunks)
- crates/katana/storage/provider/tests/class.rs (1 hunks)
- crates/katana/storage/provider/tests/contract.rs (1 hunks)
- crates/katana/storage/provider/tests/fixtures.rs (2 hunks)
- crates/katana/storage/provider/tests/storage.rs (1 hunks)
- crates/katana/storage/provider/tests/utils.rs (1 hunks)
- crates/saya/core/src/blockchain/mod.rs (2 hunks)
- crates/saya/provider/src/rpc/mod.rs (2 hunks)
- crates/saya/provider/src/rpc/transaction.rs (5 hunks)
💤 Files with no reviewable changes (2)
- crates/katana/node/Cargo.toml
- crates/katana/storage/provider/src/providers/in_memory/mod.rs
🧰 Additional context used
🔇 Additional comments (110)
crates/katana/primitives/src/message.rs (1)
5-7
: Ohayo! The feature-gated derive attributes look good, sensei! The addition of the `arbitrary` and `serde` derive macros is well-structured and aligns perfectly with the PR's goal of standardizing database interactions. The feature-gating ensures these capabilities are only included when needed. Let's verify the feature configurations in the crate:
✅ Verification successful
The feature configuration verification shows that both `arbitrary` and `serde` features are properly configured in the crate's Cargo.toml. Let's verify one more aspect to ensure the derive macros are used consistently across similar message types in the codebase. This will help us confirm that the derive attributes are applied uniformly across similar message types.
Ohayo! The feature configuration is perfect, sensei! ٩(◕‿◕。)۶
The verification confirms that:
- Both `arbitrary` and `serde` features are properly defined in Cargo.toml
- `serde` is included in default features, which is appropriate for database serialization
- The `arbitrary` feature correctly depends on the workspace-level arbitrary and alloy-primitives
- `OrderedL2ToL1Message` is the only L2-to-L1 message type, so the derive attributes are consistently applied

The changes are well-structured and maintain consistency within the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify feature configurations in Cargo.toml
# Expected: Should find feature definitions for both arbitrary and serde

# Search for feature definitions in the corresponding Cargo.toml
rg -A 5 '^\[features\]' crates/katana/primitives/Cargo.toml

# Verify that these features are actually used in dependencies
rg 'arbitrary|serde' crates/katana/primitives/Cargo.toml
```

Length of output: 536
Script:
```shell
#!/bin/bash
# Search for other message-related structs to verify consistent derive usage
ast-grep --pattern 'struct $_Message { $$$ }'

# Also check for any other L2-to-L1 related types
rg -B 2 -A 1 'struct.*L2.*L1'
```

Length of output: 439
crates/katana/primitives/src/lib.rs (1)
26-26
: Ohayo! The import path change looks good, sensei! The migration from `starknet::core::types` to `starknet_types_core::felt` aligns with the goal of simplifying the codebase. Let's verify that this change is consistent across other files.

crates/katana/storage/db/src/models/block.rs (1)
7-7
: Ohayo! The Arbitrary derive looks good, sensei! The addition of property-based testing capabilities aligns well with the transition to database-backed testing.
Let's verify that other related structs also derive Arbitrary consistently:
✅ Verification successful
Let me gather more context about the related structs to ensure we have a complete picture.
This will help us see if the Arbitrary derive is consistently applied to all relevant structs and if there are any patterns we should be aware of.
Ohayo! The Arbitrary derive is perfectly balanced, like all things should be, sensei!
The verification shows that the Arbitrary derive is consistently applied across all database model structs that need property-based testing:
- StorageEntry
- ContractStorageKey
- ContractStorageEntry
- ContractClassChange
- ContractNonceChange
- StoredBlockBodyIndices
The implementation follows the same pattern everywhere with `#[cfg_attr(test, derive(::arbitrary::Arbitrary))]`, which is exactly what we want to see for test-only derives.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for consistent Arbitrary derive across related structs
# Expected: Other database model structs should also derive Arbitrary for testing

# Search for struct definitions in the models directory
rg --type rust "struct.*StoredBlock.*" "crates/katana/storage/db/src/models/"

# Check for Arbitrary derive patterns
rg --type rust "derive.*Arbitrary" "crates/katana/storage/db/src/models/"
```

Length of output: 814
Script:
```shell
#!/bin/bash
# Let's check the actual struct definitions and their Arbitrary derives
rg -A 5 --type rust "#\[cfg_attr\(test, derive\(::arbitrary::Arbitrary\)\)\]" "crates/katana/storage/db/src/models/"
```

Length of output: 2972
crates/katana/primitives/src/da/mod.rs (1)
7-16
: Ohayo! The L1DataAvailabilityMode enum looks well-structured, sensei! The enum provides a clean abstraction for L1 data availability modes with proper serialization support. The conditional feature flags for arbitrary and serde are appropriately implemented.
crates/katana/storage/db/Cargo.toml (3)
10-10
: Ohayo! Adding the arbitrary feature enhances testing capabilities. The addition of the "arbitrary" feature to katana-primitives aligns well with improving the testing framework, especially useful for property-based testing scenarios.
20-20
: LGTM! Simplified tempfile dependency. Making tempfile a direct workspace dependency without the optional flag simplifies the dependency tree.
40-40
: Verify the impact of the empty test-utils feature. The test-utils feature no longer depends on tempfile. While this aligns with the PR's goal of simplification, we should verify that all test utilities still function correctly.
✅ Verification successful
Let me check the content of the test-utils feature and its usage to make a proper assessment.
The previous results show tempfile usage in various test files and some crates using test-utils feature, but we need to specifically check the storage/db crate's test files and feature definition to understand the impact of the empty feature.
Let me check one more thing to see how this feature is actually used in the codebase.
This will help us understand if the empty test-utils feature is actually needed or if it's safe to be empty.
Ohayo sensei! After analyzing the codebase, I can now provide a definitive response about the empty test-utils feature.
Empty test-utils feature is safe and intentional
The test-utils feature in katana-db is correctly implemented as an empty feature. Looking at the code:
- The feature is only used to gate the `test_utils` module, which contains a helper function `create_test_db()`
- This module is conditionally compiled with `#[cfg(any(test, feature = "test-utils"))]`
- The functionality it gates doesn't depend on tempfile directly as tempfile is already a main dependency in the crate
The empty feature serves as a flag to expose test utilities to other crates without requiring additional dependencies, as all necessary dependencies are already in the main dependency list.
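For reference, this is the shape such an empty feature takes in `Cargo.toml` (a minimal sketch mirroring the description above, not the crate's actual manifest):

```toml
[features]
# No dependencies listed: enabling this feature pulls in nothing extra,
# it only widens the cfg gate on the test_utils module.
test-utils = []
```

With the module gated as `#[cfg(any(test, feature = "test-utils"))]`, the helper compiles either for the crate's own tests or when a downstream crate opts in via `katana-db = { features = ["test-utils"] }`.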
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any remaining tempfile usage in tests that might be affected

# Search for tempfile usage in test files
rg -l 'use.*tempfile' $(fd -t f -e rs test)

# Search for test-utils feature usage
rg -l 'features.*=.*"test-utils"' $(fd -t f -e toml)
```

Length of output: 7996
Script:
```shell
#!/bin/bash
# Check the content of the storage/db Cargo.toml to see test-utils feature definition
cat crates/katana/storage/db/Cargo.toml

# Check for any test files in the storage/db crate that might use test-utils
fd -t f -e rs test crates/katana/storage/db/

# Look for any test-utils usage in those test files
rg -l "test-utils" $(fd -t f -e rs test crates/katana/storage/db/)
```

Length of output: 1909
Script:
```shell
#!/bin/bash
# Check the content of the mdbx/mod.rs file since it appears to use test-utils
cat crates/katana/storage/db/src/mdbx/mod.rs

# Also check for any test modules in the db crate
fd -t f -e rs "test" crates/katana/storage/db/src/
```

Length of output: 20629
crates/katana/cairo/Cargo.toml (2)
22-23
: Ohayo! Clean formatting with proper spacing. The added newlines improve readability by separating the dependencies and features sections.
23-26
: Well-documented feature addition, sensei! The new `cairo-vm-test-utils` feature is clearly documented and serves a specific purpose for enabling Arbitrary trait implementations. Let's verify its usage across the codebase.

✅ Verification successful
Let me gather more context about the feature usage.
This will help us understand how the feature is being used and if it's properly connected between the crates.
Ohayo! The feature implementation is properly connected and documented, sensei!
The verification shows that the `cairo-vm-test-utils` feature is well-integrated:

- It's correctly defined in `katana-cairo/Cargo.toml` to expose the `test_utils` feature from `cairo-vm`
- The feature is properly used by the `katana-primitives` crate through its `arbitrary` feature, which explicitly depends on `katana-cairo/cairo-vm-test-utils`
- The dependency chain is well-documented with a clear comment explaining why the feature is needed
The feature setup follows Rust's best practices for conditional compilation and test utilities.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the usage of the new feature and its dependencies

# Test 1: Check if any files are using the new feature
echo "Checking for direct feature usage..."
rg -l 'cairo-vm-test-utils'

# Test 2: Look for Arbitrary trait implementations that might need this feature
echo "Checking for potential Arbitrary implementations..."
rg -l 'impl.*Arbitrary'
```

Length of output: 347
Script:
```shell
#!/bin/bash
# Check the content of both Cargo.toml files to understand the feature usage
echo "Content of crates/katana/cairo/Cargo.toml:"
cat crates/katana/cairo/Cargo.toml

echo -e "\nContent of crates/katana/primitives/Cargo.toml:"
cat crates/katana/primitives/Cargo.toml

# Look for actual Arbitrary implementations with context
echo -e "\nChecking Arbitrary implementations with context:"
rg "impl.*Arbitrary" -A 5 -B 2
```

Length of output: 3162
crates/katana/primitives/Cargo.toml (3)
10-11
: Ohayo sensei! New workspace dependencies look good!

The addition of `katana-cairo`, `arbitrary`, and `starknet-types-core` as workspace dependencies aligns well with the PR's objective of simplifying the database backend.

Also applies to: 13-13, 24-24
27-27
: LGTM! Enabling the arbitrary feature for alloy-primitives. The addition of the arbitrary feature to `alloy-primitives` is consistent with the new arbitrary feature being introduced.
40-45
: Verify the necessity of cairo-vm-test-utils. The arbitrary feature includes `katana-cairo/cairo-vm-test-utils`, which seems to be required based on a comment. Let's verify this dependency.

✅ Verification successful
Let me gather more information to understand the dependency relationship better.
Ohayo sensei! I can now provide a clear response based on the findings.
The cairo-vm-test-utils dependency is correctly included and necessary
The comment in `crates/katana/cairo/Cargo.toml` explains why this dependency is needed:

> Some types that we use from cairo-vm implement the `Arbitrary` trait only under the `test_utils` feature.

Since the `arbitrary` feature in the primitives crate needs these `Arbitrary` trait implementations from cairo-vm, the dependency is correctly included.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for the referenced comment in katana-cairo's Cargo.toml
# that explains why cairo-vm-test-utils is needed
rg -p "cairo-vm-test-utils" crates/katana/cairo/Cargo.toml
```

Length of output: 109
Script:
```shell
#!/bin/bash
# Let's check for any comments or usage patterns related to cairo-vm-test-utils
rg -p -B 2 -A 2 "cairo-vm-test-utils" crates/katana/cairo/Cargo.toml

# Also check for any test files that might be using this dependency
rg -p "cairo-vm-test-utils" crates/katana/cairo/src

# Let's also check the primitives crate for any test files using this
rg -p "cairo-vm-test-utils" crates/katana/primitives/src
```

Length of output: 382
crates/katana/executor/benches/execution.rs (1)
21-21
: Ohayo! Verify benchmark validity with the new provider, sensei! The switch from in-memory to database-backed provider aligns with the PR objectives, but we should ensure this doesn't invalidate our benchmark results.
Consider the following implications:
- Benchmarks now include disk I/O overhead
- Results may vary based on the underlying storage performance
- Cold vs hot cache characteristics might differ
Would you like me to help create additional benchmark variants to measure the overhead introduced by the database backend?
crates/katana/primitives/src/contract.rs (1)
19-21
: Ohayo! The arbitrary feature addition looks good, sensei! The conditional attribute additions enhance testing capabilities while maintaining clean architecture. This change aligns well with the PR's objective of improving the testing framework. Let's verify the feature usage:
Let's verify the feature usage:
✅ Verification successful
Ohayo! The arbitrary feature implementation is perfectly aligned, sensei!
The verification shows that:
- The arbitrary feature is properly configured in the root Cargo.toml
- The `#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]` attribute is consistently used across multiple structs in the primitives crate, showing a uniform pattern
- The ContractAddress struct follows the same pattern as other structs in the codebase
The implementation maintains consistency with the codebase's existing patterns and properly supports property-based testing capabilities.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check if the arbitrary feature is properly configured in Cargo.toml
rg -l 'arbitrary.*=.*"' $(fd -g 'Cargo.toml')

# Verify other usages of arbitrary derivation for consistency
ast-grep --pattern '#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]'
```

Length of output: 5227
crates/katana/storage/db/src/models/contract.rs (1)
Line range hint `1-65`: Overall implementation looks solid!

The new contract change tracking structures are well-designed and properly implemented. They align perfectly with the PR's objective of simplifying the database backend by providing consistent serialization patterns.
crates/katana/storage/provider/tests/utils.rs (1)
2-2
: Ohayo! Clean import refactoring, sensei! The shift from external `starknet` types to internal `katana_primitives` types aligns well with the PR's goal of simplifying the backend and improving encapsulation.

crates/katana/storage/provider/src/test_utils.rs (1)
4-4
: Ohayo! Clean import addition, sensei! The new import for `test_utils` aligns perfectly with the PR's objective of transitioning to a database-backed provider.

crates/katana/executor/tests/simulate.rs (1)
8-8
: Ohayo sensei! Verify the consistency of PriceUnit usage across the codebase.

The change from `starknet::core::types::PriceUnit` to `katana_primitives::fee::PriceUnit` looks good, as it aligns with the PR's goal of simplifying and standardizing the codebase. However, let's ensure this change is consistently applied across all usages.

✅ Verification successful
Let me gather more information to ensure we have a complete picture of PriceUnit usage in the codebase.
The additional context will help us verify if the migration is complete and if there are any other price unit types we should be aware of.
Ohayo sensei! The PriceUnit migration is properly implemented across the codebase!
The verification shows that:
- No instances of `starknet::core::types::PriceUnit` imports remain
- All usages are consistently from `katana_primitives::fee::PriceUnit`
- The implementation in `katana_primitives::fee` is properly referenced across the codebase
- The RPC layer correctly handles the conversion between internal and external PriceUnit types
The only external PriceUnit reference is in `rpc-types/src/receipt.rs`, where it's explicitly converting our internal PriceUnit to starknet's PriceUnit for RPC responses, which is the correct pattern.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that all PriceUnit usages are from katana_primitives::fee
# and no starknet::core::types::PriceUnit remains
echo "Checking for any remaining starknet PriceUnit imports..."
rg "use starknet::core::types::PriceUnit"

echo "Verifying katana PriceUnit usage..."
rg "use katana_primitives::fee::PriceUnit"
```

Length of output: 372
Script:
```shell
#!/bin/bash
# Let's check for any direct usage of PriceUnit (without import statements)
echo "Checking for PriceUnit usage patterns..."
rg "PriceUnit" -A 2

# Let's also check if there are any other price unit related types or imports
echo -e "\nChecking for any other price-related types..."
rg -i "price.*unit|unit.*price"

# Let's verify the actual implementation location
echo -e "\nChecking the implementation location..."
fd -e rs -e toml "PriceUnit" --exec grep -l "PriceUnit" {}
```

Length of output: 13869
crates/katana/primitives/src/event.rs (1)
7-8
: Ohayo! The derive attributes look good, sensei! The conditional compilation for the `arbitrary` and `serde` features is well-structured. This ensures these dependencies are only included when needed. Let's verify the feature usage across the codebase:
✅ Verification successful
Let me gather more information about the feature definitions to ensure they are properly configured.
Let me try one more verification with a different approach:
Ohayo! The feature attributes are properly configured, sensei!
Based on the verification results, I can confirm that:
- The `serde` feature is enabled by default in the primitives crate
- The `arbitrary` feature is properly defined with necessary dependencies
- `OrderedEvent` is used in both the trace and executor modules, indicating its importance in the codebase

The conditional attributes for both `arbitrary` and `serde` are well-structured and align with the crate's feature configuration. The default features include `serde`, which ensures serialization capabilities are available where needed, while `arbitrary` remains optional for testing purposes.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check if these features are properly defined in Cargo.toml
rg -l 'features.*=.*"(arbitrary|serde)"' Cargo.toml

# Verify the usage of OrderedEvent with these features
rg -A 2 'use.*OrderedEvent'
```

Length of output: 589
Script:
```shell
#!/bin/bash
# Check the actual feature definitions in Cargo.toml
cat Cargo.toml | grep -A 10 '\[features\]'

# Check if OrderedEvent is used with serde serialization/deserialization
rg "OrderedEvent.*serde" -A 2

# Check if OrderedEvent is used with arbitrary
rg "OrderedEvent.*arbitrary" -A 2

# Check if there are any tests using OrderedEvent
rg "OrderedEvent" "**/tests/**"
```

Length of output: 196
Script:
```shell
#!/bin/bash
# Check the Cargo.toml in the primitives crate specifically
cat crates/katana/primitives/Cargo.toml | grep -A 10 '\[features\]'

# Look for test files in the primitives crate
fd test "crates/katana/primitives"

# Check if OrderedEvent is used in any test files
rg "OrderedEvent" "crates/katana/primitives/**/*test*"
```

Length of output: 751
crates/katana/executor/benches/concurrent.rs (1)
30-30
: Ohayo sensei! The provider change looks good! The replacement of `test_in_memory_provider()` with `test_provider()` aligns well with the PR's objective of simplifying the database backend by removing the in-memory provider.

crates/saya/core/src/blockchain/mod.rs (2)
5-5
: Ohayo! Import looks good, sensei! The addition of the DbProvider import aligns well with the PR's objective of transitioning away from the in-memory provider.
73-73
: Implementation change looks solid, sensei! The switch to `DbProvider::new_ephemeral()` aligns with the PR's goal of simplifying the database backend. However, let's verify that existing tests aren't affected by this change.

✅ Verification successful
The test files seem to be using `InMemoryProvider`, so I need to see their content and any related test utilities to understand if this change requires updates to the test code. This will help me provide a more accurate assessment of the impact.
I need to see the fixtures implementation to understand how the test data is set up and verify whether the change from `InMemoryProvider` to `DbProvider::new_ephemeral()` maintains the same behavior. This will help ensure the change is safe and doesn't affect test coverage.
Ohayo! The change to DbProvider::new_ephemeral() is safe, sensei!
After analyzing the fixtures and test implementation, I can confirm that the change is correct. Here's why:
- The `provider_with_states` fixture is provider-agnostic and works with any type that implements `BlockWriter + StateFactoryProvider`
- `DbProvider::new_ephemeral()` provides the same in-memory behavior as the old `InMemoryProvider`
- All tests are already running with both `DbProvider` and `ForkedProvider`, showing the implementation is properly abstracted

The test files using `InMemoryProvider` are actually template functions that get applied to both `DbProvider` and `ForkedProvider` test cases, so no test coverage is lost.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any test files that might be affected by this change
# Look for test modules that might have been using InMemoryProvider

# Search for test modules that might need updates
rg -l "test" --type rust | xargs rg "InMemoryProvider" -l

# Check for any remaining references to InMemoryProvider that might have been missed
rg "InMemoryProvider" --type rust
```

Length of output: 633
Script:
```shell
#!/bin/bash
# Let's check the test files to understand the context better
cat crates/katana/storage/provider/tests/contract.rs
cat crates/katana/storage/provider/tests/storage.rs
cat crates/katana/storage/provider/tests/class.rs

# Also check if there are any test utilities or fixtures that might be using InMemoryProvider
rg -g "*.rs" "provider_with_states" -A 5 -B 5
```

Length of output: 29649
Script:
```shell
#!/bin/bash
# Let's check the fixtures to understand how the test data is set up
cat crates/katana/storage/provider/tests/fixtures.rs

# Also check if there are any other files that might be using InMemoryProvider
rg "InMemoryProvider" --type rust -C 5
```

Length of output: 10386
crates/katana/storage/provider/tests/contract.rs (1)
4-4
: Ohayo! Import changes look good, sensei! The simplified imports correctly reflect the removal of the in-memory provider, aligning with the PR's objective to streamline the database backend.
crates/katana/primitives/src/version.rs (1)
7-7
: Ohayo! Nice addition of property-based testing capabilities, sensei! The conditional derive of `Arbitrary` is well-implemented and properly feature-gated. This enhancement will support more robust testing scenarios, particularly useful as we transition away from the in-memory provider.

crates/katana/storage/db/src/lib.rs (1)
85-85
: LGTM!

crates/katana/storage/provider/tests/class.rs (2)
5-6
: Ohayo sensei! Import changes look good! The imports have been correctly updated to remove in-memory provider related dependencies, aligning with the PR's objective of simplifying the database backend.
Line range hint `1-159`: Verify test coverage after in-memory provider removal.

The test file maintains comprehensive coverage through fork and database provider tests, but let's verify that all test cases are still being executed.
✅ Verification successful
Based on the output, I can now generate the final response since I have enough information to verify the test coverage.
Test coverage remains comprehensive after in-memory provider removal
The test coverage is maintained through fork and database provider implementations, with all test cases properly covered:
- The test file has both `test_latest_class_read` and `test_historical_class_read` templates that are applied to both fork and db providers
- Each test case in the historical tests covers different block numbers (0, 1, 4, 5) with varying class states
- The test assertions verify all aspects: compiled class hash, compiled class, and sierra class
- The remaining InMemoryProvider references in the codebase are either in different test files or in the provider implementation itself, not affecting this test coverage
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify test coverage for class-related functionality
# Expected: All test cases should be covered by either fork or db provider tests

# Check for any remaining in-memory provider references
echo "Checking for any remaining in-memory provider references..."
rg -i "inmemory|in_memory" --type rust

# List all test functions to verify coverage
echo "Listing all test functions..."
rg -B 1 "^[[:space:]]*fn.*test.*\(" --type rust
```

Length of output: 39341
crates/katana/primitives/src/receipt.rs (1)
9-10
: Verify arbitrary derivation for the Event struct.

The Event struct contains vectors, which might need special handling in arbitrary generation.
✅ Verification successful
Let me gather more context about the arbitrary implementation and the Event struct.
The first script will show us the Event struct definition to understand its fields. The second script will help us find any custom Arbitrary implementations in the codebase. The third script will help us locate any Vec usages to understand the context better.
Ohayo sensei! After analyzing the codebase, I can now provide a conclusive response about the Event struct's arbitrary derivation.
The arbitrary derivation for Event struct is safe to use
The Event struct contains two `Vec` fields (`keys` and `data`), but this is safe because:

- The `Felt` type is a primitive type that already implements `Arbitrary`
- `Vec<T>` automatically implements `Arbitrary` when `T` implements `Arbitrary` (this is built into the arbitrary crate)
- The other field, `from_address` (`ContractAddress`), is also a simple type that implements `Arbitrary`

No custom arbitrary implementation is needed for `Vec<Felt>`, as the default implementation from the arbitrary crate handles this case correctly.
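The "container gets the trait for free" mechanism described above can be sketched with a std-only blanket impl. `MiniArbitrary` and the byte-stream "entropy source" below are hypothetical stand-ins for illustration, not the real arbitrary crate API:

```rust
// Std-only sketch of the blanket-impl mechanism the arbitrary crate relies
// on: once the element type implements the trait, Vec<T> gets it for free.
trait MiniArbitrary: Sized {
    fn generate(seed: &mut dyn Iterator<Item = u8>) -> Self;
}

impl MiniArbitrary for u8 {
    fn generate(seed: &mut dyn Iterator<Item = u8>) -> Self {
        seed.next().unwrap_or(0)
    }
}

// Blanket impl: Vec<T> is generatable whenever T is.
impl<T: MiniArbitrary> MiniArbitrary for Vec<T> {
    fn generate(seed: &mut dyn Iterator<Item = u8>) -> Self {
        // Draw a small length first, then draw each element.
        let len = usize::from(u8::generate(&mut *seed) % 4);
        let mut out = Vec::with_capacity(len);
        for _ in 0..len {
            out.push(T::generate(&mut *seed));
        }
        out
    }
}

fn main() {
    let mut seed = [5u8, 1, 2, 3].into_iter();
    // len = 5 % 4 = 1, then one element drawn from the remaining bytes
    let v: Vec<u8> = MiniArbitrary::generate(&mut seed);
    assert_eq!(v, vec![1]);
    println!("{:?}", v); // prints "[1]"
}
```

This mirrors why no custom impl is needed for `Vec<Felt>`: only the element type has to implement the trait.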
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if there are any custom arbitrary implementations for Vec<Felt>
rg -l "impl.*Arbitrary.*for.*Vec.*Felt"
```

Length of output: 41
Script:
```bash
#!/bin/bash
# Check the Event struct definition and its fields
rg -A 10 "struct Event" crates/katana/primitives/src/receipt.rs

# Check if there are any custom arbitrary implementations in the codebase
ast-grep --pattern 'impl Arbitrary for $_'

# Check if there are any Vec<Felt> usages
rg "Vec<.*Felt"
```

Length of output: 13079
crates/katana/storage/provider/tests/storage.rs (2)
4-4
: Ohayo! Import changes look good, sensei!

The removal of the in-memory provider imports aligns well with the PR's objective of simplifying the database backend.
Line range hint
27-71
: Verify test coverage after removing InMemoryProvider, sensei!

The removal of the InMemoryProvider tests maintains a clean testing structure. However, let's verify that all test scenarios are still covered by the remaining providers.
crates/saya/provider/src/rpc/mod.rs (2)
11-11
: Ohayo! Clean import addition for L1DataAvailabilityMode, sensei!

The new import aligns well with the type conversion needs and maintains a clear separation between RPC and internal types.
108-115
: Clean and explicit enum conversion implementation!

The match statement provides a clear and type-safe conversion between `starknet::core::types::L1DataAvailabilityMode` and `katana_primitives::da::L1DataAvailabilityMode`. This approach:
- Ensures exhaustive handling of all enum variants
- Makes the conversion explicit and maintainable
- Prevents potential runtime errors from implicit conversions
Let's verify that all L1DataAvailabilityMode enum variants are handled:
✅ Verification successful
Let me gather more information about the starknet core types to ensure we have a complete verification.
This will help us verify that both enums have matching variants and ensure our conversion is complete.
Let me try a different approach to find the starknet core types.
This will help us find all occurrences and definitions across the codebase, sensei! 🍜
Ohayo, sensei! After analyzing the codebase, I can now provide the final verification response!
The L1DataAvailabilityMode conversion is complete and correct! ✨
The search results confirm that both enums have exactly the same variants:
- Both have `Blob` and `Calldata` variants
- The conversion in the match statement handles all possible cases
- The implementation in `katana_primitives::da::L1DataAvailabilityMode` matches perfectly with `starknet::core::types::L1DataAvailabilityMode`
The conversion is being used consistently throughout the codebase, making it a reliable and type-safe implementation. Keep up the great work, sensei! 🎯
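The exhaustive-match conversion pattern this comment verifies can be sketched with two std-only stand-in enums. The real types live in `starknet::core::types` and `katana_primitives::da`; the definitions below are assumptions for illustration only:

```rust
// Hypothetical stand-ins for the RPC-side and primitives-side enums.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcL1DataAvailabilityMode {
    Blob,
    Calldata,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum L1DataAvailabilityMode {
    Blob,
    Calldata,
}

// An exhaustive match: adding a variant to the source enum becomes a
// compile error here instead of a runtime surprise.
impl From<RpcL1DataAvailabilityMode> for L1DataAvailabilityMode {
    fn from(mode: RpcL1DataAvailabilityMode) -> Self {
        match mode {
            RpcL1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
            RpcL1DataAvailabilityMode::Calldata => L1DataAvailabilityMode::Calldata,
        }
    }
}

fn main() {
    let converted: L1DataAvailabilityMode = RpcL1DataAvailabilityMode::Blob.into();
    assert_eq!(converted, L1DataAvailabilityMode::Blob);
    println!("{:?}", converted); // prints "Blob"
}
```

This is why the review calls the approach type-safe: the compiler, not the runtime, enforces that every variant is handled.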
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Verify that both enums have matching variants
echo "Checking L1DataAvailabilityMode definitions..."
rg --type rust "enum L1DataAvailabilityMode" -A 5
```

Length of output: 512
Script:
```bash
#!/bin/bash
# Search for L1DataAvailabilityMode in starknet core types
echo "Checking starknet core L1DataAvailabilityMode..."
rg --type rust "L1DataAvailabilityMode" -B 2 -A 2 crates/starknet
```

Length of output: 228
Script:
```bash
#!/bin/bash
# Search for L1DataAvailabilityMode in all Rust files
echo "Searching for all L1DataAvailabilityMode definitions and usages..."
rg --type rust "L1DataAvailabilityMode" -B 2 -A 2
```

Length of output: 16630
crates/katana/storage/provider/tests/fixtures.rs (1)
68-69
: Ohayo sensei! Clean implementation of the test database provider.

The simplified initialization using `test_utils::create_test_db()` aligns well with the PR's goal of standardizing on the database backend.

Cargo.toml (2)
121-121
: Ohayo sensei! Nice addition of property-based testing capabilities!

The addition of the `arbitrary` crate with the `derive` feature is a great move for enhancing the testing framework, especially as we transition away from the in-memory provider.
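For context, the shape of such a dependency declaration might look like the fragment below. The version string is grounded in the snippet quoted later in this review; the `[workspace.dependencies]` placement is an assumption for illustration, not the actual Cargo.toml contents:

```toml
# Workspace-level declaration enabling the Arbitrary derive macro.
[workspace.dependencies]
arbitrary = { version = "1.3.2", features = ["derive"] }
```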
248-248
: Consistent feature addition for starknet-types-core!

Adding the `arbitrary` feature to `starknet-types-core` aligns perfectly with the new arbitrary dependency, enabling comprehensive property-based testing across Starknet types.

crates/katana/rpc/rpc-types/src/receipt.rs (1)
3-3
: Ohayo! Clean import addition, sensei!

The `PriceUnit` import is well-placed and aligns with the PR's objective of standardizing fee handling.

crates/katana/rpc/rpc-types/src/block.rs (2)
49-54
: Ohayo! The L1DataAvailabilityMode conversion looks good, sensei!

The match expression correctly handles the conversion between `katana_primitives::da::L1DataAvailabilityMode` and `starknet::core::types::L1DataAvailabilityMode`. The implementation is exhaustive and type-safe.
Line range hint
1-300
: Verify the impact of the L1DataAvailabilityMode changes across the codebase, sensei!

Given that this PR aims to simplify the database backend, we should ensure that these L1DataAvailabilityMode changes are consistently applied across all related components.
Let's check for any potential inconsistencies:
✅ Verification successful
Ohayo! After analyzing the search results, I can provide a definitive response about the L1DataAvailabilityMode changes.
L1DataAvailabilityMode implementation is consistent across the codebase, sensei!
The search results show that the L1DataAvailabilityMode handling is properly implemented with consistent patterns across all relevant components:
- All conversions between `katana_primitives::da::L1DataAvailabilityMode` and `starknet::core::types::L1DataAvailabilityMode` are properly handled
- Default values consistently use `L1DataAvailabilityMode::Calldata`
- The enum variants (`Blob` and `Calldata`) are consistently matched and converted
- The implementation spans multiple components (RPC, storage, block producer, etc.) with consistent patterns
The changes in the reviewed file align perfectly with the rest of the codebase's implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for L1DataAvailabilityMode usage across the codebase
echo "Checking L1DataAvailabilityMode usage patterns..."
rg -A 2 "L1DataAvailabilityMode::"

# Search for potential missed conversions
echo "Checking for potential missed conversions..."
rg "l1_da_mode\s*[:=]" --type rust
```

Length of output: 9322
crates/katana/executor/tests/fixtures/mod.rs (3)
27-27
: LGTM! Import aligns with architectural changes.

Ohayo sensei! The addition of the DbProvider import aligns perfectly with the PR's objective of removing the in-memory provider.
Line range hint
73-84
: Verify transaction handling consistency.

While the provider change looks good, let's ensure the DB backend handles all transaction scenarios consistently, especially for the complex cases in the test fixtures.
Let's verify the transaction handling:
✅ Verification successful
Based on the shell script results, I can now generate the final response since we have comprehensive evidence about transaction handling across the codebase.
Transaction handling is thoroughly tested and consistent
The codebase demonstrates robust transaction handling with extensive test coverage:
- Comprehensive transaction hash computation tests for all transaction types (deploy, declare, invoke) and versions (v1, v2, v3)
- Thorough transaction pool management tests verifying ordering, validation, and state transitions
- Extensive DB provider tests ensuring proper transaction storage and retrieval
- Complete test coverage for transaction simulation and execution
- Detailed transaction processing tests in the RPC layer
The transition to DbProvider maintains all these guarantees, as evidenced by the extensive test suite that covers the entire transaction lifecycle.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for transaction-related test coverage

# Look for transaction test cases
rg -g '*.rs' -A 5 'test.*transaction|test.*tx'

# Check for any transaction-related assertions
rg -g '*.rs' 'assert.*transaction|assert.*tx'
```

Length of output: 29139
Line range hint
73-84
: LGTM! Verify test performance impact.

The transition to `DbProvider::new_ephemeral()` is clean and aligns with the architectural simplification goals. The implementation maintains the same functionality while removing the custom in-memory logic.
Let's verify the performance impact of this change:
crates/katana/rpc/rpc/src/starknet/trace.rs (2)
17-18
: Ohayo sensei! Clean import changes.

The addition of `PriceUnit` to the imports aligns well with the standardization effort. The imports are organized logically.
Line range hint
142-167
: Verify trace retrieval with the MDBX backend.

The transition from the in-memory provider to the MDBX backend requires verification of trace retrieval functionality, especially in the `block_traces` method where pending blocks are handled.

✅ Verification successful
Let me gather more information about the trace functionality implementation to ensure there are no issues with the MDBX transition.
The previous results show that in-memory references are mainly in tests and the fork provider, but we need to verify the actual trace implementation in MDBX provider to ensure it properly handles the trace retrieval functionality.
Let me try a different approach to verify the trace implementation.
The previous search patterns weren't specific enough. Let's look for the trait definition and actual implementation to verify how traces are handled in the MDBX provider, ohayo sensei! 🍜
Ohayo sensei! After analyzing the trace implementation, I can now provide a conclusive response.
Trace functionality is properly implemented in the MDBX backend ✨
The search results show that the trace functionality is cleanly abstracted through the `TransactionTraceProvider` trait and properly implemented in the database provider. Key findings:

- The `transaction_executions_by_block` implementation in the DB provider (which includes MDBX) correctly handles trace retrieval using block indices
- The transition from in-memory to MDBX doesn't affect trace functionality, as the implementation follows the same trait contract
- Test coverage exists for trace functionality across different block types (latest, pending)
The remaining in-memory references found are only in test files and fork provider implementations, which don't affect the core MDBX trace functionality. Keep coding with confidence, sensei! 🎌
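The trait-contract argument above can be sketched in a few lines. `TraceProvider`, `TxTrace`, and the toy backend below are simplified stand-ins for the real `TransactionTraceProvider` abstraction, not its actual signatures:

```rust
// Callers depend only on the trait, so swapping an in-memory backend for
// an MDBX-backed one does not change the trace-retrieval code path.
type BlockNumber = u64;

#[derive(Debug, Clone, PartialEq)]
struct TxTrace {
    steps: u64,
}

trait TraceProvider {
    fn transaction_executions_by_block(&self, block: BlockNumber) -> Option<Vec<TxTrace>>;
}

// A toy backend keyed by block number; a database-backed impl would
// satisfy the exact same contract.
struct VecBackend {
    blocks: Vec<Vec<TxTrace>>,
}

impl TraceProvider for VecBackend {
    fn transaction_executions_by_block(&self, block: BlockNumber) -> Option<Vec<TxTrace>> {
        self.blocks.get(block as usize).cloned()
    }
}

fn main() {
    let backend = VecBackend { blocks: vec![vec![TxTrace { steps: 7 }]] };
    let traces = backend.transaction_executions_by_block(0).unwrap();
    assert_eq!(traces.len(), 1);
    println!("{}", traces[0].steps); // prints "7"
}
```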
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify trace-related functionality in the codebase

# Look for any remaining in-memory provider references that might affect tracing
rg -g '!target' -g '!*.lock' "in_memory|InMemory" .

# Check for any potential issues with trace retrieval in tests
rg -g '!target' -g '!*.lock' "block_traces|trace_block_transactions" .
```

Length of output: 3347
Script:
```bash
#!/bin/bash
# Check the implementation of trace-related methods in the provider
ast-grep --pattern 'fn transaction_executions_by_block'

# Check test coverage for trace functionality with MDBX
rg -g '!target' "mdbx.*trace" .

# Look for any MDBX-specific trace implementations
rg -g '!target' -g '*.rs' "impl.*MdbxProvider" -A 10
```

Length of output: 146
Script:
```bash
#!/bin/bash
# Check the trait implementation for transaction executions
ast-grep --pattern 'trait BlockchainProvider { $$$ fn transaction_executions_by_block $$$ }'

# Look for MDBX provider implementation files
fd mdbx_provider .

# Check for trace-related implementations in storage
rg -g '!target' -g '*.rs' "transaction_executions" -B 2 -A 5
```

Length of output: 11862
crates/katana/storage/provider/tests/block.rs (1)
26-27
: Ohayo sensei! Clean import refactoring looks good.

The imports have been properly cleaned up after removing the in-memory provider, maintaining only the necessary imports for the db and fork providers.

crates/katana/core/src/backend/storage.rs (2)
crates/katana/core/src/backend/storage.rs (2)
9-9
: Ohayo! Clean import addition, sensei!

The new import aligns well with the standardization efforts and is used appropriately in the `l1_da_mode` conversion logic.
247-247
: Ohayo! Excellent test adaptation, sensei!

The switch to `DbProvider::new_ephemeral()` aligns perfectly with the PR's goal of removing the in-memory provider while maintaining the test's effectiveness. The ephemeral nature of the DB provider makes it an ideal choice for testing scenarios.

crates/katana/rpc/rpc/src/starknet/mod.rs (2)
35-35
: Ohayo! Import looks good, sensei!

The addition of `PriceUnit` to the imports is well-placed and necessary for the fee unit conversion logic.
116-119
: Ohayo! Fee unit conversion looks solid, sensei!

The match expression provides a clean and type-safe conversion between the primitive and core price unit types. Let's verify the consistency of price unit handling across the codebase.
✅ Verification successful
Ohayo! The fee unit conversion is consistently implemented across the codebase, sensei!
The verification results show that all fee unit conversions follow the same pattern, converting between `katana_primitives::fee::PriceUnit` and the corresponding types consistently. We found three identical conversion implementations:

- crates/katana/rpc/rpc/src/starknet/mod.rs
- crates/katana/rpc/rpc/src/starknet/trace.rs
- crates/katana/rpc/rpc-types/src/receipt.rs

All conversions handle both `Wei` and `Fri` variants exhaustively and correctly. The default implementations in tests and storage consistently use `PriceUnit::Wei`, which aligns with the expected behavior.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify consistent usage of PriceUnit types across the codebase

# Search for PriceUnit usage to ensure consistent type conversion
echo "Checking PriceUnit usage patterns..."
rg -A 2 "PriceUnit::"

# Search for potential fee-related type conversions that might need similar handling
echo "Checking other fee-related conversions..."
rg -A 2 "fee\.(unit|price)"
```

Length of output: 5470
crates/katana/executor/src/implementation/blockifier/state.rs (2)
242-242
: Ohayo! Import change looks good, sensei!

The addition of the DbProvider import aligns with the PR's objective of transitioning away from the in-memory provider.
270-270
: Provider initialization updated correctly, sensei!

The switch to `test_provider()` maintains test functionality while aligning with the removal of the in-memory provider.

crates/katana/primitives/src/chain_spec.rs (2)
12-12
: LGTM! Import path refactoring looks good.

Ohayo sensei! The change to use the local `da` module instead of the external `starknet::core::types` aligns well with the PR's objective of simplifying and standardizing the codebase.
233-233
: LGTM! Consistent import path in tests.

The test module's import has been updated to maintain consistency with the main code changes.
crates/katana/storage/provider/src/providers/db/mod.rs (3)
9-9
: LGTM!

Ohayo! The new import for `init_ephemeral_db` is correctly added and aligns with the new ephemeral database functionality.
60-66
: LGTM! Clean implementation of ephemeral database support.

The new `new_ephemeral()` method provides a clean way to create temporary database instances, which is particularly useful for testing scenarios. The error handling is appropriate, using `expect` for initialization failures.
Line range hint
1-1000
: Verify test coverage for ephemeral database lifecycle

Let's ensure we have proper test coverage for the ephemeral database lifecycle.
✅ Verification successful
Let me check the test coverage more thoroughly by examining the actual test implementations.
The results will help us determine if we have proper test coverage for the ephemeral database lifecycle, particularly around initialization and cleanup. Let me analyze the actual test implementations rather than just the file names.
Ohayo sensei! After analyzing the test coverage for the ephemeral database functionality, I can provide a conclusive response.
Proper test coverage exists for ephemeral database lifecycle
The codebase has comprehensive test coverage for ephemeral database functionality:
- The `DbEnv::open_ephemeral()` implementation properly handles:
  - Temporary directory creation with `tempfile`
  - Database configuration with appropriate ephemeral settings
  - Cleanup through a `Drop` implementation that removes the temporary directory
- Test coverage includes:
  - Basic database creation test in `db_creation()`
  - Full CRUD operations in ephemeral mode through `create_test_db()`
  - Proper cleanup verification through the `Drop` trait implementation
  - Comprehensive transaction and cursor operations testing
The implementation follows best practices for ephemeral database handling with proper resource cleanup, making the review comment's concern unwarranted.
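The create-then-clean-up-on-`Drop` lifecycle described above can be sketched with std only. The real implementation uses the `tempfile` crate and MDBX; `EphemeralDir` below is a hypothetical stand-in that only demonstrates the cleanup pattern:

```rust
use std::fs;
use std::path::PathBuf;

// A throwaway directory that serves as the "database path" and removes
// itself when the guard is dropped.
struct EphemeralDir {
    path: PathBuf,
}

impl EphemeralDir {
    fn new(name: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(name);
        fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
}

impl Drop for EphemeralDir {
    fn drop(&mut self) {
        // Best-effort cleanup, mirroring how a temp-dir guard removes its
        // contents when the database handle is dropped.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let path;
    {
        let db_dir = EphemeralDir::new("ephemeral-db-demo")?;
        path = db_dir.path.clone();
        assert!(path.exists());
    } // guard dropped here, directory removed
    assert!(!path.exists());
    println!("cleaned up");
    Ok(())
}
```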
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for test coverage of ephemeral database functionality

# Test: Look for tests using new_ephemeral
rg -l "new_ephemeral" --type rust

# Test: Look for any existing ephemeral database tests
rg -l "ephemeral.*test" --type rust
```

Length of output: 355
Script:
```bash
#!/bin/bash
# Check the implementation of ephemeral database tests
rg -A 10 "new_ephemeral.*test|test.*new_ephemeral" --type rust

# Check the implementation of ephemeral database in mdbx tests
cat crates/katana/storage/db/src/mdbx/mod.rs
```

Length of output: 20643
crates/katana/primitives/src/fee.rs (4)
1-9
: Ohayo, sensei! The `ResourceBounds` struct is correctly implemented.

The definitions and serde implementations look solid.
11-17
: The `ResourceBoundsMapping` struct is well-defined and clear.

It effectively maps L1 and L2 gas resource bounds.
19-27
: The `PriceUnit` enum is appropriately set up with serialization support.

The serde renaming ensures correct external representation.
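A std-only approximation of these fee primitives can make the relationships concrete. The field types (u64 / u128) are assumptions based on the names in this review, not the exact `katana_primitives::fee` definitions, and the serde attributes are omitted:

```rust
// Whether a fee amount is denominated in Wei (ETH) or Fri (STRK).
#[derive(Debug, Clone, Copy, PartialEq)]
enum PriceUnit {
    Wei,
    Fri,
}

// Bounds on how much of a resource a transaction may consume, and at
// what maximum unit price.
#[derive(Debug, Clone, Copy)]
struct ResourceBounds {
    max_amount: u64,
    max_price_per_unit: u128,
}

// Maps the L1 and L2 gas resource bounds.
#[derive(Debug, Clone, Copy)]
struct ResourceBoundsMapping {
    l1_gas: ResourceBounds,
    l2_gas: ResourceBounds,
}

fn main() {
    let bounds = ResourceBoundsMapping {
        l1_gas: ResourceBounds { max_amount: 5_000, max_price_per_unit: 2 },
        l2_gas: ResourceBounds { max_amount: 0, max_price_per_unit: 0 },
    };
    // Worst-case L1 fee implied by the bounds, in some price unit.
    let max_fee = u128::from(bounds.l1_gas.max_amount) * bounds.l1_gas.max_price_per_unit;
    let unit = PriceUnit::Wei;
    assert_eq!(max_fee, 10_000);
    println!("{} {:?}", max_fee, unit); // prints "10000 Wei"
}
```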
Line range hint
31-42
: Ohayo, sensei! The addition of `unit` to `TxFeeInfo` enhances fee handling.

This provides clarity on the fee type used in transactions.

crates/katana/storage/db/src/models/storage.rs (3)
crates/katana/storage/db/src/models/storage.rs (3)
10-10
: Ohayo, sensei! Good addition of the `Arbitrary` trait derivation for testing.
38-38
: Ohayo, sensei! Consistent addition of the `Arbitrary` trait derivation for testing.
64-64
: Ohayo, sensei! Consistent addition of the `Arbitrary` trait derivation for testing.

crates/katana/storage/provider/src/providers/in_memory/state.rs (1)
90-90
: Ohayo sensei! Please verify the complete removal of `InMemorySnapshot`.

Commenting out `InMemorySnapshot` aligns with simplifying the database backend. However, ensure there are no lingering references to `InMemorySnapshot` elsewhere in the codebase.

Run the following script to confirm:
✅ Verification successful
Since we found only one occurrence of `InMemorySnapshot`, which is the commented line itself, I can generate the final response.

Ohayo sensei! The `InMemorySnapshot` removal is clean and complete! ✨

The search results confirm that the only occurrence of `InMemorySnapshot` is the commented line itself in crates/katana/storage/provider/src/providers/in_memory/state.rs. There are no other references to this type in the codebase, indicating a successful cleanup.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that there are no remaining references to InMemorySnapshot in the codebase.

# Test: Search for any occurrences of InMemorySnapshot. Expect: No matches outside of commented code.
rg 'InMemorySnapshot'
```

Length of output: 142
crates/katana/primitives/src/chain.rs (2)
8-8
: Ohayo sensei! Addition of `arbitrary` and `serde` derivations to `NamedChainId`.

The inclusion of `#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]` and `#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]` enhances the `NamedChainId` enum by enabling conditional compilation for arbitrary data generation and serialization/deserialization. This improves testing flexibility and data interchange capabilities.
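The conditional-derive pattern works because `cfg_attr` expands to nothing when the feature is off, so the `arbitrary` and `serde` crates are not even required to compile. `ChainKind` below is a hypothetical stand-in type, not the real `NamedChainId`:

```rust
// With neither feature enabled, both cfg_attr attributes vanish at
// compile time, and the derive paths are never resolved.
#[derive(Debug, Clone, Copy, PartialEq)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
enum ChainKind {
    Mainnet,
    Sepolia,
}

fn main() {
    // Compiles without the arbitrary/serde crates because both cfgs are off.
    let kind = ChainKind::Mainnet;
    assert_ne!(kind, ChainKind::Sepolia);
    println!("{:?}", kind); // prints "Mainnet"
}
```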
76-76
: Ohayo sensei! Addition of `arbitrary` and `serde` derivations to `ChainId`.

By adding `#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]` and `#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]`, the `ChainId` enum now supports arbitrary data generation and serialization/deserialization. This enhances the flexibility and testability of the code.

crates/katana/primitives/src/block.rs (1)
78-78
: Ohayo sensei! The addition of the `arbitrary` derive to `Header` enhances testing capabilities.

crates/katana/primitives/src/trace.rs (7)
12-19
: Ohayo, sensei! The `ExecutionResources` struct is well-defined.

The structure of the `ExecutionResources` struct is clear and includes the necessary fields for tracking execution metrics. The derivations for cloning, debugging, default values, and serialization are appropriately included.
Line range hint
43-49
: Ohayo, sensei! The `TxResources` struct effectively encapsulates transaction resources.

The `TxResources` struct provides a comprehensive view of transaction resources, including reverted steps and gas consumption. The inclusion of the `ExecutionResources` and `L1Gas` structures offers detailed tracking, which aligns well with the goal of enhanced resource management.
53-56
: Ohayo, sensei! The `L1Gas` struct is properly defined.

The `L1Gas` struct cleanly separates the L1 gas components into `l1_gas` and `l1_data_gas`, which should aid in precise gas accounting.
Line range hint
62-68
: Ohayo, sensei! The `CallType` enum is appropriately defined.

The `CallType` enum accurately represents the different types of contract calls, distinguishing between normal calls and library delegate calls. This differentiation is crucial for correct call handling.
Line range hint
73-78
: Ohayo, sensei! The `EntryPointType` enum covers the necessary variants.

The `EntryPointType` enum includes all essential entry point types: `External`, `L1Handler`, and `Constructor`. This ensures comprehensive coverage of possible entry points in contract execution.
Line range hint
83-107
: Ohayo, sensei! The `CallInfo` struct is comprehensive and detailed.

The `CallInfo` struct effectively encapsulates all relevant information about a call, including addresses, call types, execution resources, events, messages, and more. This level of detail enhances the ability to track and debug call executions thoroughly.
129-132
: Ohayo, sensei! The `BuiltinCounters` struct is well-implemented.

The use of `#[serde(transparent)]` in the `BuiltinCounters` struct ensures seamless serialization of the underlying `HashMap`. The provided methods offer a clean interface to access built-in counts, and filtering out zero counts in the `From` implementation optimizes storage.

crates/saya/provider/src/rpc/transaction.rs (5)
3-4
: Ohayo, sensei! Necessary imports have been added for new functionality.

The addition of the `DataAvailabilityMode`, `ResourceBounds`, and `ResourceBoundsMapping` imports is appropriate and required for handling the new transaction types and resource bounds.
50-50
: Ohayo, sensei! Correctly converting transaction fields using helper functions.

The use of the `from_rpc_resource_bounds` and `from_rpc_da_mode` functions to convert `resource_bounds`, `nonce_data_availability_mode`, and `fee_data_availability_mode` ensures that the RPC transaction data is accurately mapped to the internal transaction representation.

Also applies to: 54-55
120-120
: Ohayo, sensei! Consistent application of conversion functions in DeclareTransaction V3.

Applying the `from_rpc_resource_bounds` and `from_rpc_da_mode` functions in the `DeclareTransaction::V3` variant maintains consistency across transaction types and ensures proper data mapping.

Also applies to: 124-125
155-155
: Ohayo, sensei! Proper handling of resource bounds and data availability modes in DeployAccountTransaction V3.

The addition of the `resource_bounds`, `nonce_data_availability_mode`, and `fee_data_availability_mode` fields, along with their conversion using helper functions, accurately reflects the new transaction structure.

Also applies to: 158-159
176-189
: Ohayo, sensei! The `from_rpc_resource_bounds` function maps resource bounds correctly.

The `from_rpc_resource_bounds` function accurately converts the RPC `ResourceBoundsMapping` into the internal `ResourceBoundsMapping`, ensuring all fields (`max_amount` and `max_price_per_unit` for both `l1_gas` and `l2_gas`) are properly assigned.

crates/katana/storage/db/src/tables.rs (1)
291-291
: Ohayo, sensei! The addition of the `PriceUnit` import is correct and necessary for the tests.

crates/katana/primitives/src/transaction.rs (8)
Line range hint
511-524
: Verify parameters in `compute_deploy_account_v3_tx_hash`, sensei.

Ensure that the parameters passed to `compute_deploy_account_v3_tx_hash` match the function's expected inputs, and that all new fields are correctly incorporated into the hash calculation.

Use this script to check:
```bash
#!/bin/bash
# Description: Verify parameters for compute_deploy_account_v3_tx_hash.

# Test: Display function definition.
rg -A 15 'pub fn compute_deploy_account_v3_tx_hash' crates/katana/primitives/src/utils/transaction.rs

# Test: Check the usage in calculate_hash method.
rg -A 10 'compute_deploy_account_v3_tx_hash' crates/katana/primitives/src/transaction.rs
```
Line range hint
432-446
: Ohayo, sensei! Ensure all uses of `DeployAccountTx` handle the `V3` variant.

With `DeployAccountTxV3` introduced, confirm that any logic or match statements involving `DeployAccountTx` are updated to include the new variant, ensuring consistent behavior across the codebase.

Run the following script to locate all references:
```bash
#!/bin/bash
# Description: Locate all match statements on 'DeployAccountTx' to verify 'V3' inclusion.

# Test: Search for 'match' statements involving 'DeployAccountTx'.
rg 'match\s+\w+:\s+DeployAccountTx' crates/
```
25-25
: Ohayo, sensei! Verify the conditional compilation feature.

The addition of `#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]` ensures that the `Arbitrary` trait is derived when the `arbitrary` feature is enabled. Please confirm that the `arbitrary` feature is correctly defined in your Cargo.toml to avoid any compilation issues.

You can run the following script to check the feature definition:
✅ Verification successful
Let me try a different search strategy to verify the feature definition.
This should help us find the feature definition in the relevant Cargo.toml file. The previous search results were not helpful as they showed unrelated transaction code instead of the Cargo.toml contents.
Ohayo, sensei! The `arbitrary` feature is properly defined!

From the search results, I can see that the `arbitrary` feature is correctly defined in the codebase with all necessary dependencies:

- The feature is defined with proper dependencies: `arbitrary = ["alloy-primitives/arbitrary", "dep:arbitrary", ...]`
- The `arbitrary` dependency is included in the workspace with derive support: `arbitrary = { version = "1.3.2", features = [ "derive" ] }`
- Other crates in the workspace also use this feature consistently

The conditional compilation attribute `#[cfg_attr(feature = "arbitrary", derive(::arbitrary::Arbitrary))]` will work as expected when the feature is enabled.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that the 'arbitrary' feature is defined in Cargo.toml files.

# Test: Search for the 'arbitrary' feature definition in Cargo.toml files.
rg '\[features\]' -A 5 --glob 'Cargo.toml' | rg 'arbitrary'
```

Length of output: 1451
Script:
```bash
#!/bin/bash
# Search for the arbitrary feature in Cargo.toml files, specifically within the katana/primitives crate
fd Cargo.toml --exec grep -A 10 -B 2 'arbitrary' {}

# Also check the direct content of the primitives Cargo.toml
fd Cargo.toml --glob '*primitives/Cargo.toml' --exec cat {}
```

Length of output: 2700
Line range hint
396-422
: Verify hash calculation in `L1HandlerTx`, sensei.

Ensure that the `calculate_hash` method in `L1HandlerTx` correctly reflects any changes made to the `contract_address` representation. It's important that the hash calculation remains accurate to maintain transaction integrity.

You can check the usage with:
✅ Verification successful
We need more information to verify the hash calculation. Let's check the implementation of `compute_l1_handler_tx_hash` and see how it handles the contract address.

The previous results were incomplete, and we need to see the actual implementation of `compute_l1_handler_tx_hash` to verify whether the contract address conversion is handled correctly. This will help us understand if the `Felt::from(self.contract_address)` usage in `calculate_hash` is appropriate.
Ohayo sensei! Based on the verification results, I can now provide a conclusive response about the hash calculation.
Hash calculation in `L1HandlerTx` is correctly implemented, sensei! ✨

The implementation shows perfect alignment between `calculate_hash` and `compute_l1_handler_tx_hash`:

- `calculate_hash` converts `contract_address` to `Felt` using `Felt::from(self.contract_address)`
- `compute_l1_handler_tx_hash` expects `contract_address` as a `Felt`
- The conversion ensures the contract address is in the correct format for hash calculation
The hash calculation maintains transaction integrity by properly handling the contract address representation.
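The newtype conversion this comment verifies can be sketched with std only. The `u64`-backed `Felt` and the `toy_hash` function below are assumptions for illustration: the real `Felt` is a 252-bit field element, and the real hash is `compute_l1_handler_tx_hash`:

```rust
// A ContractAddress wrapper converts into the field-element type via From,
// so hashing code can write Felt::from(self.contract_address).
#[derive(Debug, Clone, Copy, PartialEq)]
struct Felt(u64);

#[derive(Debug, Clone, Copy)]
struct ContractAddress(Felt);

impl From<ContractAddress> for Felt {
    fn from(addr: ContractAddress) -> Self {
        addr.0
    }
}

// A toy stand-in for the hash function, which expects the address as a Felt.
fn toy_hash(contract_address: Felt, nonce: Felt) -> Felt {
    Felt(contract_address.0.wrapping_mul(31).wrapping_add(nonce.0))
}

fn main() {
    let address = ContractAddress(Felt(7));
    // The From impl is what makes Felt::from(address) well-typed here.
    let hash = toy_hash(Felt::from(address), Felt(1));
    assert_eq!(hash, Felt(218));
    println!("{}", hash.0); // prints "218"
}
```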
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify 'contract_address' usage in L1HandlerTx hash calculation.

# Test: Check the conversion of 'contract_address' in calculate_hash.
rg 'Felt::from\(\s*self\.contract_address\s*\)' crates/katana/primitives/src/transaction.rs
```

Length of output: 1451
Script:
```bash
#!/bin/bash
# Search for the compute_l1_handler_tx_hash implementation
rg -A 15 'pub fn compute_l1_handler_tx_hash' crates/katana/primitives/src/utils/transaction.rs

# Also check if there are any tests related to L1HandlerTx hash calculation
rg -A 5 'test.*L1HandlerTx.*hash' crates/katana/primitives/src/
```

Length of output: 596
158-160
: Ensure all `InvokeTx` variants are properly handled, sensei.

With the addition of `InvokeTxV3`, make sure that all match statements and logic that handle `InvokeTx` variants are updated accordingly to include `InvokeTxV3`, preventing any potential runtime errors.

Consider running the following script to identify all match statements on `InvokeTx`:

✅ Verification successful
We need more information to verify how `InvokeTx` variants are handled. Let's search more specifically for `InvokeTx` matches and its variants.

Let's also check for any functions that might be handling the different variants:
This will help us understand:
- The full enum definition and its variants
- All match statements handling InvokeTx
- Any functions specifically dealing with InvokeTx variants
- The broader context of InvokeTx handling
Ohayo, sensei! After analyzing the codebase, I can now provide a conclusive response about the `InvokeTx` handling.

All `InvokeTx` variants are properly handled across the codebase, sensei!

The search results show that all match statements and implementations properly handle both `InvokeTx::V1` and `InvokeTx::V3` variants. Specifically:

- In transaction handling (`primitives/src/transaction.rs`): both variants are handled for hash computation
- In pool management (`pool/src/tx.rs`): both variants are handled for nonce, sender_address, max_fee, and tip
- In blockifier utils (`executor/src/implementation/blockifier/utils.rs`): both variants have their respective match arms
- In RPC types (`rpc/rpc-types/src/transaction.rs`): both V1 and V3 variants are properly converted and handled

The codebase shows consistent handling of both variants across all relevant components, with appropriate version-specific logic (e.g., V1 having `max_fee` and V3 having `tip`).
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all match statements on 'InvokeTx' to ensure 'V3' variant is handled.
# Test: Search for 'match' statements involving 'InvokeTx'.
rg 'match\s+\w+:\s+InvokeTx' crates/
Length of output: 1451
Script:
#!/bin/bash
# Search for InvokeTx enum definition and its variants
rg -A 10 'pub enum InvokeTx' crates/
# Search for match statements with InvokeTx using ast-grep
ast-grep --pattern 'match $_ { InvokeTx::$_ => $$$ }'
# Search for any other potential InvokeTx handling
rg 'InvokeTx::\w+' crates/
Length of output: 4065
Script:
#!/bin/bash
# Search for functions that might handle InvokeTx variants
rg -A 5 'fn.*InvokeTx' crates/
# Look for match expressions with multiple lines of context
rg -A 10 'match.*InvokeTx' crates/
Length of output: 1166
Line range hint 254-276: Ohayo, sensei! Update handling of `DeclareTx` with the new `V3` variant.
With the addition of `DeclareTxV3`, ensure that all logic handling `DeclareTx` variants, such as transaction processing and validation, is updated to include the new variant to prevent unexpected behavior.
Consider using this script to find all relevant match statements:
✅ Verification successful
Let me search for more match statements and usages of DeclareTx to ensure comprehensive verification.
Let's analyze the match statements and implementations to ensure V3 variant is properly handled everywhere.
Ohayo, sensei! All match statements properly handle the DeclareTx::V3 variant!
After analyzing the codebase, I can confirm that all relevant match statements and implementations have been properly updated to include the V3 variant of DeclareTx. Specifically:
- Transaction pool handling (nonce, sender_address, max_fee, tip)
- Hash computation and class hash retrieval
- RPC type conversions
- Blockifier transaction conversions
The code shows proper handling of V3-specific features like:
- Tip field (present in V3, absent in V1/V2)
- Max fee field (absent in V3, present in V1/V2)
- All necessary transaction conversion logic
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find match statements on 'DeclareTx' to ensure 'V3' variant is included.
# Test: Search for 'match' statements involving 'DeclareTx'.
rg 'match\s+\w+:\s+DeclareTx' crates/
Length of output: 1451
Script:
#!/bin/bash
# Search for match statements with DeclareTx using ast-grep for better accuracy
ast-grep --pattern 'match $_ { DeclareTx::$$$'
# Search for any other usages of DeclareTx
rg "DeclareTx::" -A 3 crates/
# Search for impl blocks of DeclareTx
ast-grep --pattern 'impl $_ for DeclareTx { $$$'
Length of output: 8986
Line range hint 236-254: Ensure the new fields are correctly used in the `InvokeTxV3` hash calculation, sensei.
In `InvokeTxV3`, verify that all the new fields such as `resource_bounds`, `paymaster_data`, and the data availability modes are appropriately utilized in the `compute_invoke_v3_tx_hash` function to ensure accurate hash computation.
You can inspect the function implementation with:
✅ Verification successful
Let me search for the correct implementation since the previous search didn't yield the complete results.
The previous search seems incomplete. Let's gather the actual implementation to verify if all the new fields are correctly used in the hash calculation.
Let me search for one more piece to complete our verification - the hash_fee_fields function since it's used for the resource bounds.
This will help us confirm if the resource bounds and data availability modes are properly handled in the hash calculation.
Ohayo sensei! After reviewing the implementation, I can now provide a conclusive response about the hash calculation.
All new fields are correctly included in the InvokeV3 hash calculation
The implementation properly includes all the new fields in the hash computation:
- Resource bounds (L1 and L2) are hashed via the `hash_fee_fields` function
- Paymaster data is hashed using `poseidon_hash_many`
- Data availability modes are properly encoded and included
- Account deployment data is included in the hash calculation
This is verified by both the implementation and the test case that matches a mainnet transaction hash, ensuring compatibility with the Starknet protocol.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Inspect compute_invoke_v3_tx_hash implementation.
# Test: Display compute_invoke_v3_tx_hash function definition.
rg -A 15 'pub fn compute_invoke_v3_tx_hash' crates/katana/primitives/src/utils/transaction.rs
Length of output: 1451
Script:
#!/bin/bash
# Let's try a more specific search for invoke v3 hash computation
ast-grep --pattern 'pub fn compute_invoke_v3_tx_hash'
# Backup search with ripgrep for any invoke v3 hash related code
rg -A 20 'compute_invoke_v3_tx_hash'
# Let's also check the transaction hash computation implementation
rg -A 10 'impl Transaction.*hash' crates/katana/primitives/src/transaction.rs
Length of output: 7279
Script:
#!/bin/bash
# Let's check the hash_fee_fields implementation
rg -A 10 'fn hash_fee_fields' crates/katana/primitives/src/utils/transaction.rs
# Also check for any constants or types related to data availability modes
rg -A 3 'DataAvailabilityMode' crates/katana/primitives/src/
Length of output: 8758
Line range hint 227-234: Ohayo, sensei! Verify parameters in `compute_invoke_v1_tx_hash`.
In the `calculate_hash` method for `InvokeTx::V1`, ensure that all parameters passed to `compute_invoke_v1_tx_hash` are correct and match the function's signature, especially the `nonce` field.
You can check the function definition and usage with:
✅ Verification successful
The previous search didn't show us the `compute_invoke_v1_tx_hash` function definition or its usage. Let's try a more specific search. This should help us verify the parameters being passed to the function and their correctness.
Let's try one more search to find the function definition, this time using ripgrep which might be more effective for this case.
Ohayo, sensei! The parameters in `compute_invoke_v1_tx_hash` are perfectly aligned!
Looking at both the function definition and its usage, all parameters match exactly:
- Function signature: `(sender_address: Felt, calldata: &[Felt], max_fee: u128, chain_id: Felt, nonce: Felt, is_query: bool)`
- Usage: `(Felt::from(tx.sender_address), &tx.calldata, tx.max_fee, tx.chain_id.into(), tx.nonce, is_query)`
The parameters are passed in the correct order with proper types and transformations (like `Felt::from` and `into()`). The `nonce` field is correctly placed and typed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify parameters of compute_invoke_v1_tx_hash.
# Test: Display compute_invoke_v1_tx_hash function definition.
rg -A 5 'pub fn compute_invoke_v1_tx_hash' crates/katana/primitives/src/utils/transaction.rs
# Test: Check the usage in calculate_hash method.
rg -A 5 'compute_invoke_v1_tx_hash' crates/katana/primitives/src/transaction.rs
Length of output: 1451
Script:
#!/bin/bash
# Let's find the compute_invoke_v1_tx_hash function definition
ast-grep --pattern 'pub fn compute_invoke_v1_tx_hash($_) { $$$ }'
# And also search for its usage with ripgrep for better context
rg -A 10 'InvokeTx::V1\(' crates/katana/primitives/src/transaction.rs
Length of output: 535
Script:
#!/bin/bash
# Search for compute_invoke_v1_tx_hash function definition
rg -A 15 'pub fn compute_invoke_v1_tx_hash' crates/katana/primitives/src/utils/transaction.rs
# Also search in the entire codebase to be thorough
rg -A 15 'pub fn compute_invoke_v1_tx_hash'
Length of output: 1877
crates/katana/storage/db/src/mdbx/mod.rs (14)
43-45: Refactored `DbEnv` struct enhances encapsulation.
The `DbEnv` struct now contains an `inner` field wrapped in `Arc`. This change improves resource management and allows for shared ownership.
47-56: Introduction of the `DbEnvInner` struct improves isolation.
Defining a separate `DbEnvInner` struct encapsulates environment details and assists in managing the ephemeral state effectively.
90-95: Initialize `DbEnvInner` with the correct `ephemeral` flag.
When opening the database with `open`, ensure that the `ephemeral` flag is correctly set to `false` to prevent unintended data loss.
97-126: The `open_ephemeral` method facilitates temporary databases.
The addition of `open_ephemeral` provides a convenient way to create databases that clean up after use, which is beneficial for testing scenarios.
131-131: Use `self.inner.env` directly for clarity.
In `create_tables`, accessing the environment through `self.inner.env` maintains consistency with the new structure.
167-171: Consistent transaction initiation with `self.inner.env`.
Both the `tx()` and `tx_mut()` methods now access the environment via `self.inner.env`, aligning with the refactored structure.
184-185: Handle potential errors when fetching environment info.
Ensure that error handling for `self.inner.env.info()` and `self.inner.env.freelist()` is robust to prevent unexpected crashes.
240-241: Update imports to reflect the refactoring.
The test utilities now import `DbEnv` and `init_ephemeral_db` correctly, which aligns with the new structure.
245-249: Simplify test database creation with `create_test_db`.
The `create_test_db` function now leverages `init_ephemeral_db`, streamlining the process of setting up databases for tests.
251-262: Implement the `Drop` trait to manage ephemeral databases.
The `Drop` implementation for `DbEnv` ensures that ephemeral databases are cleaned up when no longer in use, preventing resource leaks.
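The Arc-plus-Drop pattern behind this comment can be sketched in isolation. Note this is a minimal hypothetical model: the real `DbEnv`/`DbEnvInner` live in katana-db and wrap an MDBX environment, so the struct fields, the `ephemeral` flag, and the `strong_count` check below are illustrative assumptions, not the actual implementation.

```rust
use std::fs;
use std::path::PathBuf;
use std::sync::Arc;

// Stand-in for the inner environment state shared behind an Arc.
struct DbEnvInner {
    path: PathBuf,
    ephemeral: bool,
}

#[derive(Clone)]
struct DbEnv {
    inner: Arc<DbEnvInner>,
}

impl Drop for DbEnv {
    fn drop(&mut self) {
        // Only the last remaining handle performs cleanup, and only if the
        // environment was opened as ephemeral; persistent DBs are untouched.
        if self.inner.ephemeral && Arc::strong_count(&self.inner) == 1 {
            let _ = fs::remove_dir_all(&self.inner.path);
        }
    }
}

fn main() {
    let dir = std::env::temp_dir().join("ephemeral-db-demo");
    fs::create_dir_all(&dir).unwrap();
    let env = DbEnv { inner: Arc::new(DbEnvInner { path: dir.clone(), ephemeral: true }) };
    let clone = env.clone();
    drop(env); // another handle still exists, so nothing is removed yet
    assert!(dir.exists());
    drop(clone); // last handle: the directory is cleaned up
    assert!(!dir.exists());
}
```

The `strong_count == 1` guard is what makes shared ownership safe here: clones of the handle never race to delete the directory out from under each other.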
Line range hint 486-513: Ensure unique key insertion in the `db_cursor_insert` test.
The test attempts to insert a duplicate key, expecting a failure. Confirm that this behavior aligns with the database's constraints and document the expected error.
147-151: Ohayo, sensei! Verify the necessity of the `path` method.
The `path` method returns the database environment directory path. Ensure that this method is utilized elsewhere; if not, consider removing it to simplify the code.
You can check for usages of `path()` with the following script:
#!/bin/bash
# Description: Find all references to `path()` method.
# Expected result: List of files where `path()` is called.
rg 'path\(\)'
Line range hint 524-562: Ohayo, sensei! Verify ordering in the dup-sort test.
In the `db_dup_sort` test, ensure that the ordering of entries matches expectations, given that MDBX may impose certain ordering constraints.
Run the following script to check the order of entries:
#!/bin/bash
# Description: Retrieve and display entries for `ContractStorage`.
# Expected result: Entries are ordered as per the keys.
rg 'ContractStorage'
10-11: Ohayo, sensei! Ensure the `Arc` import is necessary.
The `use std::sync::Arc;` import has been added. Verify if `Arc` is required here or if it can be omitted to keep the code clean.
You can run the following script to check if `Arc` is used elsewhere in the code:
✅ Verification successful
Let me search for direct usage of `Arc` in this specific file.
Ohayo! The `Arc` import is indeed necessary, sensei!
`Arc` is actively used in this module for thread-safe reference counting of the database environment. Specifically, it's used in:
- The `DbEnv` struct, where `inner: Arc<DbEnvInner>`
- Creating new instances with `Arc::new(inner)`
- Mutation handling with `Arc::get_mut(&mut self.inner)`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for usages of `Arc` in the codebase.
# Expected result: Instances where `Arc` is utilized.
rg 'Arc<'
Length of output: 6972
Script:
#!/bin/bash
# Search for Arc usage in the specific file
rg 'Arc' crates/katana/storage/db/src/mdbx/mod.rs -A 5
Length of output: 1266
crates/katana/primitives/src/utils/transaction.rs (3)
3-3: Ohayo sensei! The updated imports look good.
The addition of `EthAddress`, `MsgToL1`, and `MsgToL2` from `starknet::core::types` is appropriate and necessary for the updated functions that utilize these types.
6-7: Ohayo sensei! Correctly added imports for new types.
The imports of `DataAvailabilityMode` and `ResourceBounds` from your crate modules are essential for handling the new parameters in the transaction hash computation functions.
Line range hint 65-77: Ohayo sensei! Ensure all function calls are updated to match the new signatures.
The functions `compute_deploy_account_v3_tx_hash`, `compute_declare_v3_tx_hash`, and `compute_invoke_v3_tx_hash` have updated signatures with additional parameters like `l1_gas_bounds`, `l2_gas_bounds`, and the data availability modes. Please verify that all calls to these functions throughout the codebase are updated accordingly to prevent any inconsistencies or compilation errors.
Run the following script to locate all usages of the updated functions:
Also applies to: 104-116, 140-152
crates/katana/executor/src/implementation/blockifier/utils.rs (6)
48-48: Ohayo, sensei! The import update looks good.
The import of `PriceUnit` and `TxFeeInfo` from `katana_primitives::fee` aligns with the recent refactoring to utilize `katana_primitives` for fee-related types.
482-482: Ohayo, sensei! The `to_api_da_mode` function conversion is correctly implemented.
The function accurately maps `katana_primitives::da::DataAvailabilityMode` to `DataAvailabilityMode`, ensuring proper data availability mode handling.
Also applies to: 484-485
490-490: Ohayo, sensei! Resource bounds mapping in `to_api_resource_bounds` is correct.
The function appropriately converts `katana_primitives::fee::ResourceBoundsMapping` to `ResourceBoundsMapping`, maintaining consistency in resource bounds representation.
574-576: Ohayo, sensei! Correct usage of `to_execution_resources` in `to_exec_info`.
The `vm_resources` field is properly set by converting `exec_info.transaction_receipt.resources.vm_resources` using `to_execution_resources`.
630-630: Ohayo, sensei! Proper implementation of `to_execution_resources` in `to_call_info`.
The `execution_resources` are correctly converted from `call.resources`, ensuring accurate tracking of execution metrics.
659-668: Ohayo, sensei! Verify that all execution resource fields are accounted for in `to_execution_resources`.
The function `to_execution_resources` maps `ExecutionResources` to `katana_primitives::trace::ExecutionResources` by copying `n_steps`, `n_memory_holes`, and `builtin_instance_counter`. Please ensure that all necessary fields from `ExecutionResources` are included in the conversion to prevent any loss of critical data.
let provider = test_utils::test_provider();
let flags = SimulationFlag::new();
🛠️ Refactor suggestion
Consider adding warm cache benchmarks
Since we're now using a database backend, it would be valuable to benchmark both cold and warm cache scenarios to understand the performance characteristics better.
Consider adding a warm cache variant like this:
let provider = test_utils::test_provider();
let flags = SimulationFlag::new();
+
+// Add warm cache benchmark
+let warm_provider = provider.clone();
+// Pre-warm the cache by executing a dummy transaction
+blockifier(&mut group, &warm_provider, &flags, &envs, tx.clone());
Committable suggestion was skipped due to low confidence.
 #[default(BlockchainProvider<DbProvider>)]
 pub fn provider_with_states<Db>(
-    #[default(in_memory_provider())] provider: BlockchainProvider<Db>,
+    #[default(db_provider())] provider: BlockchainProvider<Db>,
💡 Codebase verification
Ohayo sensei!
The test files need to be updated to use DbProvider instead of InMemoryProvider
While we've changed the default type in `fixtures.rs` to `DbProvider`, there are still explicit `InMemoryProvider` type annotations in three test files that need to be updated:
- `storage.rs`: test function `test_historical_storage_read` is explicitly using `InMemoryProvider`
- `contract.rs`: test function `test_historical_storage_read` is explicitly using `InMemoryProvider`
- `class.rs`: test function `test_historical_class_read` is explicitly using `InMemoryProvider`
Since these files are already importing and using `db_provider` from fixtures, they should be updated to use `DbProvider` instead of `InMemoryProvider` to align with the PR's objective.
🔗 Analysis chain
Nice work on transitioning to DbProvider as the default, sensei!
The change from `InMemoryProvider` to `DbProvider` as the default type parameter maintains the generic nature of the fixture while aligning with the PR's objective to remove the in-memory provider.
Let's verify that all test files are using the new DbProvider:
Let me verify the context of these usages to ensure we provide accurate guidance.
Let me check one more thing to understand how these test files are using the fixtures.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining references to InMemoryProvider in test files
rg "InMemoryProvider" "crates/katana/storage/provider/tests/"
Length of output: 460
Script:
#!/bin/bash
# Get more context around the InMemoryProvider usages
echo "=== storage.rs ==="
rg -B 5 -A 5 "InMemoryProvider" "crates/katana/storage/provider/tests/storage.rs"
echo -e "\n=== contract.rs ==="
rg -B 5 -A 5 "InMemoryProvider" "crates/katana/storage/provider/tests/contract.rs"
echo -e "\n=== class.rs ==="
rg -B 5 -A 5 "InMemoryProvider" "crates/katana/storage/provider/tests/class.rs"
# Also check if these files import from fixtures.rs
echo -e "\n=== Checking imports ==="
rg "use.*fixtures" "crates/katana/storage/provider/tests/*.rs"
Length of output: 1997
Script:
#!/bin/bash
# Check imports in each test file individually
echo "=== storage.rs imports ==="
rg "use.*fixtures" "crates/katana/storage/provider/tests/storage.rs"
echo -e "\n=== contract.rs imports ==="
rg "use.*fixtures" "crates/katana/storage/provider/tests/contract.rs"
echo -e "\n=== class.rs imports ==="
rg "use.*fixtures" "crates/katana/storage/provider/tests/class.rs"
# Also check if there's a mod.rs that might be importing fixtures
echo -e "\n=== Looking for mod.rs ==="
fd "mod.rs" "crates/katana/storage/provider/tests/"
Length of output: 862
let db = katana_db::init_ephemeral_db()?;
(Blockchain::new_with_db(db.clone(), &config.chain)?, Some(db))
💡 Codebase verification
Ohayo sensei! We need to clean up the remaining InMemoryProvider references!
The verification revealed that while the main code has moved to using the ephemeral DB, there are still test files using `InMemoryProvider`. The test files need to be updated to use `DbProvider` instead, as they already have the test cases duplicated for both providers.
Points that need attention:
- `test_historical_storage_read` in storage.rs still uses `InMemoryProvider`
- `test_historical_storage_read` in contract.rs still uses `InMemoryProvider`
- `test_historical_class_read` in class.rs still uses `InMemoryProvider`
- The in_memory module still exists but appears to be used only for caching (cache.rs) and state management (state.rs)
🔗 Analysis chain
Ohayo sensei! The database initialization change looks good!
The replacement of the in-memory provider with an ephemeral database aligns perfectly with the PR's objective of simplifying the database backend. This change standardizes the database interaction pattern across the codebase.
Let's verify that all in-memory provider references have been properly removed:
The previous results show some references to InMemoryProvider in tests and imports from in_memory modules. Let's dig deeper to understand if these are legitimate remaining usages or if they need to be cleaned up as part of the in-memory provider removal.
The previous results show the existence of in_memory related code and test references. Let's examine the actual content of these files to understand if they are part of the old implementation that needs to be removed or if they serve a different purpose.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to InMemoryProvider
# to ensure complete removal across the codebase
echo "Checking for InMemoryProvider references..."
rg "InMemoryProvider"
echo "Checking for in-memory related imports..."
rg "use.*in_memory"
echo "Checking for in-memory related test configurations..."
rg "in_memory.*provider"
Length of output: 1461
Script:
#!/bin/bash
# Let's check for the actual InMemoryProvider implementation and its usage
# Check for InMemoryProvider struct/type definition
echo "=== Checking InMemoryProvider definition ==="
ast-grep --pattern 'struct InMemoryProvider'
# Check for InMemoryProvider imports in test files
echo -e "\n=== Checking InMemoryProvider imports in tests ==="
rg "use.*InMemoryProvider" -A 2
# Check for any remaining in-memory provider related code
echo -e "\n=== Checking in-memory provider module structure ==="
fd "in_memory" --type d
# Check content of in_memory related files
echo -e "\n=== Checking content of in_memory related files ==="
fd "in_memory" --type f --exec echo "=== {} ===" \; --exec head -n 5 {} \;
Length of output: 695
Script:
#!/bin/bash
# Let's examine the specific files that still contain InMemoryProvider references
echo "=== Checking the content of test files with InMemoryProvider ==="
cat crates/katana/storage/provider/tests/storage.rs
echo -e "\n=== Checking the content of contract test ==="
cat crates/katana/storage/provider/tests/contract.rs
echo -e "\n=== Checking the content of class test ==="
cat crates/katana/storage/provider/tests/class.rs
echo -e "\n=== Checking the in_memory module content ==="
cat crates/katana/storage/provider/src/providers/in_memory/mod.rs
echo -e "\n=== Checking if InMemoryProvider is still exported in lib.rs ==="
rg "pub.*InMemoryProvider" crates/katana/storage/provider/src/lib.rs
Length of output: 19192
l1_da_mode: match block.header.l1_da_mode {
    katana_primitives::da::L1DataAvailabilityMode::Blob => L1DataAvailabilityMode::Blob,
    katana_primitives::da::L1DataAvailabilityMode::Calldata => {
        L1DataAvailabilityMode::Calldata
    }
},
🛠️ Refactor suggestion
Consider extracting the repeated L1DataAvailabilityMode conversion logic, sensei!
The same match expression for converting `L1DataAvailabilityMode` is duplicated across multiple struct implementations. This violates the DRY principle and could make future maintenance more challenging.
Consider creating a helper function or implementing the `From` trait:
impl From<katana_primitives::da::L1DataAvailabilityMode> for starknet::core::types::L1DataAvailabilityMode {
fn from(mode: katana_primitives::da::L1DataAvailabilityMode) -> Self {
match mode {
katana_primitives::da::L1DataAvailabilityMode::Blob => Self::Blob,
katana_primitives::da::L1DataAvailabilityMode::Calldata => Self::Calldata,
}
}
}
Then you can simplify all conversions to:
l1_da_mode: header.l1_da_mode.into(),
Also applies to: 132-137, 166-171, 282-287
unit: match fee.unit {
    katana_primitives::fee::PriceUnit::Wei => PriceUnit::Wei,
    katana_primitives::fee::PriceUnit::Fri => PriceUnit::Fri,
},
🛠️ Refactor suggestion
Consider making the price unit conversion more maintainable.
The current match expression handles the conversion well, but consider implementing a more maintainable approach to handle future price unit additions.
Consider implementing a dedicated conversion trait:
impl From<katana_primitives::fee::PriceUnit> for starknet::core::types::PriceUnit {
fn from(unit: katana_primitives::fee::PriceUnit) -> Self {
match unit {
katana_primitives::fee::PriceUnit::Wei => Self::Wei,
katana_primitives::fee::PriceUnit::Fri => Self::Fri,
}
}
}
Then simplify the conversion:
- unit: match fee.unit {
- katana_primitives::fee::PriceUnit::Wei => PriceUnit::Wei,
- katana_primitives::fee::PriceUnit::Fri => PriceUnit::Fri,
- },
+ unit: fee.unit.into(),
@@ -452,7 +452,7 @@ mod tests {
     }

     #[test]
     fn fetch_non_existant_data() -> anyhow::Result<()> {
-        let db = InMemoryProvider::new();
+        let db = DbProvider::new_ephemeral();
💡 Codebase verification
Ohayo sensei! After analyzing the codebase, I can now provide a final response about the InMemoryProvider usage.
The InMemoryProvider references need to be updated in test templates
The test files still use InMemoryProvider in their template definitions, specifically in:
- `class.rs`: `test_historical_class_read` template
- `contract.rs`: `test_historical_storage_read` template
- `storage.rs`: `test_historical_storage_read` template
However, these templates are already being correctly applied with DbProvider in the actual test implementations. The InMemoryProvider is only referenced in the template definitions but not actually used in the running tests, as evidenced by the working test implementations using DbProvider.
• `crates/katana/storage/provider/tests/class.rs`: update the template parameter from `InMemoryProvider` to `DbProvider`
• `crates/katana/storage/provider/tests/contract.rs`: update the template parameter from `InMemoryProvider` to `DbProvider`
• `crates/katana/storage/provider/tests/storage.rs`: update the template parameter from `InMemoryProvider` to `DbProvider`
🔗 Analysis chain
Good choice using ephemeral storage for tests, sensei!
The switch to `DbProvider::new_ephemeral()` maintains the temporary nature of test storage while using the standardized database backend. The edge case handling for non-existent storage keys is well documented in the test.
Let's verify that all test cases using InMemoryProvider have been updated:
Let me gather more context about these test files to understand if they need to be updated.
Let me check if there are any other files that might be using InMemoryProvider, particularly in the fixtures module that's being imported by these test files.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining instances of InMemoryProvider in test files
rg "InMemoryProvider" --type rust --type-add "rust:*.rs"
Length of output: 457
Script:
#!/bin/bash
# Check the content of these test files to understand the context
echo "=== class.rs ==="
cat crates/katana/storage/provider/tests/class.rs
echo -e "\n=== contract.rs ==="
cat crates/katana/storage/provider/tests/contract.rs
echo -e "\n=== storage.rs ==="
cat crates/katana/storage/provider/tests/storage.rs
Length of output: 18575
Script:
#!/bin/bash
# Check the fixtures module content
echo "=== fixtures.rs ==="
cat crates/katana/storage/provider/tests/fixtures.rs
Length of output: 7421
@@ -850,7 +857,7 @@ mod tests {
     }

     fn create_db_provider() -> DbProvider {
-        DbProvider(katana_db::mdbx::test_utils::create_test_db(DbEnvKind::RW))
+        DbProvider(katana_db::mdbx::test_utils::create_test_db())
🛠️ Refactor suggestion
Consider using new_ephemeral() in tests
Ohayo sensei! Since we now have the `new_ephemeral()` method, consider updating the test utility function to use it instead of directly calling `create_test_db()`. This would make the tests more consistent with the new API.
- DbProvider(katana_db::mdbx::test_utils::create_test_db())
+ DbProvider::new_ephemeral()
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-DbProvider(katana_db::mdbx::test_utils::create_test_db())
+DbProvider::new_ephemeral()
@@ -230,12 +290,12 @@
 #[test]
 fn db_creation() {
-    create_test_db(DbEnvKind::RW);
+    create_test_db();
Ohayo, sensei! Confirm test execution validity.
In the `db_creation` test, simply calling `create_test_db();` may not assert any conditions. Consider adding assertions to validate the database creation.
Apply this diff to include an assertion:
 fn db_creation() {
-    create_test_db();
+    let env = create_test_db();
+    // opening a read transaction verifies the environment is actually usable
+    assert!(env.tx().is_ok());
 }
Committable suggestion was skipped due to low confidence.
fn from_rpc_da_mode(mode: starknet::core::types::DataAvailabilityMode) -> DataAvailabilityMode {
    match mode {
        starknet::core::types::DataAvailabilityMode::L1 => DataAvailabilityMode::L1,
        starknet::core::types::DataAvailabilityMode::L2 => DataAvailabilityMode::L2,
    }
}

fn to_rpc_da_mode(mode: DataAvailabilityMode) -> starknet::core::types::DataAvailabilityMode {
    match mode {
        DataAvailabilityMode::L1 => starknet::core::types::DataAvailabilityMode::L1,
        DataAvailabilityMode::L2 => starknet::core::types::DataAvailabilityMode::L2,
    }
}
🛠️ Refactor suggestion
Refactor conversion functions to reduce code duplication.
The `from_rpc_da_mode` and `to_rpc_da_mode` functions have similar match statements converting between `DataAvailabilityMode` types. To improve maintainability and minimize duplicated code, consider using a macro to generate these functions or implementing custom traits if feasible within Rust's constraints.
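The trait-based alternative can be sketched as a pair of `From` impls. The enums below are local stand-ins for the katana and starknet-rs types (so this compiles in isolation); in the real crates the orphan rule still permits both impls, because the locally owned katana type appears in each one. Call sites then shrink to `.into()` in both directions.

```rust
// Stand-in for the starknet-rs RPC type (assumed shape for illustration).
mod rpc {
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub enum DataAvailabilityMode { L1, L2 }
}

// Stand-in for the katana primitive type.
#[derive(Debug, Clone, Copy, PartialEq)]
enum DataAvailabilityMode { L1, L2 }

// RPC -> primitive: legal because the local type is the impl target.
impl From<rpc::DataAvailabilityMode> for DataAvailabilityMode {
    fn from(mode: rpc::DataAvailabilityMode) -> Self {
        match mode {
            rpc::DataAvailabilityMode::L1 => Self::L1,
            rpc::DataAvailabilityMode::L2 => Self::L2,
        }
    }
}

// Primitive -> RPC: legal because the local type is the source.
impl From<DataAvailabilityMode> for rpc::DataAvailabilityMode {
    fn from(mode: DataAvailabilityMode) -> Self {
        match mode {
            DataAvailabilityMode::L1 => Self::L1,
            DataAvailabilityMode::L2 => Self::L2,
        }
    }
}

fn main() {
    let local: DataAvailabilityMode = rpc::DataAvailabilityMode::L2.into();
    assert_eq!(local, DataAvailabilityMode::L2);
    let back: rpc::DataAvailabilityMode = local.into();
    assert_eq!(back, rpc::DataAvailabilityMode::L2);
}
```

Using `From` also gives `Into` for free, which keeps the conversion direction explicit at each call site without any free functions to import.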
fn from_rpc_resource_bounds(
    rpc_bounds: starknet::core::types::ResourceBoundsMapping,
) -> ResourceBoundsMapping {
    ResourceBoundsMapping {
        l1_gas: ResourceBounds {
            max_amount: rpc_bounds.l1_gas.max_amount,
            max_price_per_unit: rpc_bounds.l1_gas.max_price_per_unit,
        },
        l2_gas: ResourceBounds {
            max_amount: rpc_bounds.l2_gas.max_amount,
            max_price_per_unit: rpc_bounds.l2_gas.max_price_per_unit,
        },
    }
}

fn to_rpc_resource_bounds(
    bounds: ResourceBoundsMapping,
) -> starknet::core::types::ResourceBoundsMapping {
    starknet::core::types::ResourceBoundsMapping {
        l1_gas: starknet::core::types::ResourceBounds {
            max_amount: bounds.l1_gas.max_amount,
            max_price_per_unit: bounds.l1_gas.max_price_per_unit,
        },
        l2_gas: starknet::core::types::ResourceBounds {
            max_amount: bounds.l2_gas.max_amount,
            max_price_per_unit: bounds.l2_gas.max_price_per_unit,
        },
    }
}
🛠️ Refactor suggestion
Simplify resource bounds conversion logic.
The `from_rpc_resource_bounds` and `to_rpc_resource_bounds` functions mirror each other's structure when converting `ResourceBoundsMapping`. Refactoring these functions to eliminate duplication, perhaps by utilizing a macro or a helper function, can enhance readability and maintainability.
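A minimal sketch of the helper-function approach, with local stand-in structs in place of the `katana_primitives` and starknet ones (field names follow the snippet above; everything else is an assumption): factoring the per-resource copy into one helper leaves each mapping function a two-field construction.

```rust
// Stand-in for the katana primitive types.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ResourceBounds { max_amount: u64, max_price_per_unit: u128 }

#[derive(Debug, Clone, Copy, PartialEq)]
struct ResourceBoundsMapping { l1_gas: ResourceBounds, l2_gas: ResourceBounds }

// Stand-in for the starknet RPC types.
mod rpc {
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub struct ResourceBounds { pub max_amount: u64, pub max_price_per_unit: u128 }
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub struct ResourceBoundsMapping { pub l1_gas: ResourceBounds, pub l2_gas: ResourceBounds }
}

// The shared per-resource copy: one place to touch if fields are added.
fn to_rpc_bounds(b: ResourceBounds) -> rpc::ResourceBounds {
    rpc::ResourceBounds { max_amount: b.max_amount, max_price_per_unit: b.max_price_per_unit }
}

fn to_rpc_resource_bounds(m: ResourceBoundsMapping) -> rpc::ResourceBoundsMapping {
    rpc::ResourceBoundsMapping { l1_gas: to_rpc_bounds(m.l1_gas), l2_gas: to_rpc_bounds(m.l2_gas) }
}

fn main() {
    let m = ResourceBoundsMapping {
        l1_gas: ResourceBounds { max_amount: 10, max_price_per_unit: 100 },
        l2_gas: ResourceBounds { max_amount: 0, max_price_per_unit: 0 },
    };
    let out = to_rpc_resource_bounds(m);
    assert_eq!(out.l1_gas.max_amount, 10);
    assert_eq!(out.l2_gas.max_price_per_unit, 0);
}
```

The reverse direction (`from_rpc_resource_bounds`) would use a mirror-image helper, so both mapping functions stay one line per gas kind.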
Reference
general description
simplify the codebase to make it more maintainable by removing the custom logic for the in-memory provider. instead of using a custom backend, we reuse the mdbx backend in place of what used to be the in-memory storage. the idea is to standardize the interaction of Providers on the database api traits.
PRs
- `Arbitrary` trait for all db types #2568
- `InMemoryProvider` #2569