revert(katana): revert a bunch of commits #2522
Conversation
Walkthrough

Ohayo, sensei! The changes in this pull request primarily focus on modifications to the CI/CD pipeline for a Rust project, specifically enhancing the configuration and management of the Katana framework.
Actionable comments posted: 20
🧹 Outside diff range and nitpick comments (31)
crates/katana/rpc/rpc-api/src/lib.rs (1)

5-13: Ohayo, sensei! The new `ApiKind` enum looks sharp!

The addition of the `ApiKind` enum is a clean and efficient way to represent the different APIs supported by Katana. The use of `#[derive(Debug, Copy, Clone)]` is appropriate for this type of enum, allowing for easy debugging and value manipulation.

One small suggestion to enhance the code: consider adding a brief doc comment for each variant to provide more context. For example:

```rust
pub enum ApiKind {
    /// Starknet API
    Starknet,
    /// Torii API
    Torii,
    /// Development API
    Dev,
    /// Saya API
    Saya,
}
```

This will make the enum even more self-documenting and helpful for other developers working with this code.
crates/katana/rpc/rpc/src/lib.rs (1)

6-6: Ohayo, sensei! The new `config` module looks good!

The addition of the `pub mod config;` declaration is a welcome change. It's properly placed in alphabetical order among the other module declarations. Consider updating the module-level documentation to include information about this new `config` module and its purpose within the RPC implementation.

crates/katana/core/src/sequencer.rs (1)
1-3: Ohayo, sensei! Consider documenting the plan for this temporary structure.

The TODO comment and deprecation notice indicate that this `SequencerConfig` struct is a temporary solution. To ensure smooth future development:
- Document the intended replacement (the "dedicated class for building node components").
- Create a tracking issue for the removal of this struct and implementation of its replacement.
- Add a link to the tracking issue in the TODO comment for easy reference.
This will help maintain code clarity and guide future development efforts.
Would you like assistance in creating a tracking issue for this task, sensei?
crates/katana/rpc/rpc/src/config.rs (2)
5-13: Ohayo, sensei! The ServerConfig struct looks solid!

The struct is well-organized and covers essential server configuration parameters. The use of `Option` for `allowed_origins` and `metrics` is appropriate for optional fields. Consider adding documentation comments for the struct and its fields to improve code clarity and maintainability. For example:

```rust
/// Configuration for the server
#[derive(Debug, Clone)]
pub struct ServerConfig {
    /// Port number for the server to listen on
    pub port: u16,
    /// Hostname or IP address for the server
    pub host: String,
    // ... (add comments for other fields)
}
```
15-19: Ohayo, sensei! The addr() method looks good!

The `addr()` method serves a clear purpose and correctly combines the host and port fields. Consider adding input validation or error handling for cases where the host or port might be invalid. For example:

```rust
impl ServerConfig {
    pub fn addr(&self) -> Result<String, std::net::AddrParseError> {
        let addr = format!("{}:{}", self.host, self.port);
        addr.parse::<std::net::SocketAddr>().map(|_| addr)
    }
}
```

This approach ensures that the returned address is valid and provides better error handling.
torii.toml (1)
1-14: Ohayo, sensei! The contract configuration looks well-structured.

The `contracts` array provides a clear and organized way to define multiple blockchain contracts. Each entry includes the contract type and address, which is essential for interacting with these contracts in the Torii framework.

However, I have a few suggestions to enhance maintainability and clarity:

- Consider adding a `name` field to each contract entry. This would make it easier to reference specific contracts in code without relying on comments or addresses.
- It might be beneficial to group contracts by type or project. This could be achieved by using nested arrays or objects.
- For improved readability, consider adding empty lines between different projects or contract types.

Here's an example of how these suggestions could be implemented:

```toml
contracts = [
    # Number Challenge
    { name = "NumberChallenge", type = "WORLD", address = "0x50c46c6f69ea9f12b24fcdd1569a584ae8893db0c70ea7884b605384da78ebc" },

    # Mainnet Tokens
    { name = "LORDS", type = "ERC20", address = "0x0124aeb495b947201f5fac96fd1138e326ad86195b98df6dec9009158a533b49" },

    # Mainnet NFTs
    { name = "BEASTS", type = "ERC721", address = "0x0158160018d590d93528995b340260e65aedd76d28a686e9daa5c4e8fad0c5dd" },
    { name = "LootSurvivor", type = "ERC721", address = "0x018108b32cea514a78ef1b0e4a0753e855cdf620bc0565202c02456f618c4dc4" },
    { name = "LootSurvivorGoldenToken", type = "ERC721", address = "0x04f5e296c805126637552cf3930e857f380e7c078e8f00696de4fc8545356b1d" },
    { name = "Blobert", type = "ERC721", address = "0x00539f522b29ae9251dbf7443c7a950cf260372e69efab3710a11bf17a9599f1" }
]
```

What do you think about these suggestions, sensei? They might make the configuration even more maintainable and easier to work with in the long run.
bin/katana/Cargo.toml (1)
14-15: Ohayo, sensei! These new dependencies look sharp!

The addition of `katana-rpc` and `katana-rpc-api` as workspace dependencies is a solid move. It suggests we're beefing up our RPC capabilities, which could lead to improved communication and functionality in our Katana framework.

Just a friendly reminder, sensei: as we're expanding our RPC features, let's keep an eye on how this might affect our overall architecture. It might be worth considering whether this change will impact our API design or require any additional documentation updates.
crates/katana/rpc/rpc/src/starknet/write.rs (1)
65-65: Ohayo once more, sensei! LGTM with a small suggestion.

The change to use `this.inner.backend.chain_id` is consistent with the modifications in the other methods, maintaining uniformity across the implementation. The method correctly handles the contract address and transaction hash. For even better consistency, consider adding an error check similar to the one in `add_declare_transaction_impl`:

```rust
let tx = tx
    .try_into_tx_with_chain_id(this.inner.backend.chain_id)
    .map_err(|_| StarknetApiError::InvalidContractClass)?;
```

This would provide more robust error handling and align perfectly with the declare transaction method.
crates/katana/core/src/service/mod.rs (1)
35-35: Ohayo, sensei! The `Arc` wrapper for `block_producer` is a wise choice.

The change from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` allows for shared ownership of the `BlockProducer` instance. This modification enhances concurrency and memory safety by enabling multiple parts of the code to access the `BlockProducer` simultaneously without the need for explicit locking. Consider documenting this change in the struct's documentation to explain the reasoning behind using `Arc` and its implications for thread safety and performance.

crates/katana/rpc/rpc/src/saya.rs (1)
32-32: Ohayo once more, sensei! Great update to the `new` method.

The modification to accept `Arc<BlockProducer<EF>>` in the `new` method signature is excellent. It aligns perfectly with the updated struct definition and allows for efficient sharing of the `BlockProducer` instance. For consistency with the `backend` parameter, consider using `Arc::clone` explicitly:

```rust
pub fn new(backend: Arc<Backend<EF>>, block_producer: Arc<BlockProducer<EF>>) -> Self {
    Self {
        backend: Arc::clone(&backend),
        block_producer: Arc::clone(&block_producer),
    }
}
```

This makes it clear that we're cloning the `Arc`, not the inner value, and maintains a uniform style across both fields.

crates/katana/rpc/rpc/tests/saya.rs (3)
6-6: Ohayo, sensei! The imports look good, but let's tidy up a bit.

The changes to the imports reflect the updates in the configuration structure and naming conventions. Nice work on keeping things consistent! Consider grouping related imports together for better readability. For example:

```rust
use dojo_test_utils::sequencer::{get_default_test_starknet_config, TestSequencer};
use dojo_utils::TransactionWaiter;
use katana_core::sequencer::SequencerConfig;
use katana_primitives::block::{BlockIdOrTag, BlockTag};
use katana_rpc_api::dev::DevApiClient;
use katana_rpc_api::saya::SayaApiClient;
```

Also applies to: 9-10
24-27: Ohayo again, sensei! The TestSequencer initialization looks sharp.

The updated configuration using `SequencerConfig` and `get_default_test_starknet_config()` is on point. It's consistent with the new imports and reflects the updated configuration structure. Well done! For improved readability, consider using the struct update syntax more explicitly:

```rust
let sequencer = TestSequencer::start(
    SequencerConfig { no_mining: true, ..get_default_test_starknet_config() },
).await;
```

This makes it clearer that we're using the default config and only overriding the `no_mining` field.
93-96: Ohayo once more, sensei! The changes here mirror the previous function.

The `TestSequencer` initialization in this function is consistent with the changes made in `fetch_traces_from_block`. Great job maintaining consistency across test functions! Consider extracting the common `TestSequencer` setup into a helper function to reduce code duplication:

```rust
async fn setup_test_sequencer() -> TestSequencer {
    TestSequencer::start(
        SequencerConfig { no_mining: true, ..get_default_test_starknet_config() },
    ).await
}
```

Then you can use it in both test functions like this:

```rust
let sequencer = setup_test_sequencer().await;
```

This would make the tests more DRY and easier to maintain.
.github/workflows/ci.yml (2)
48-48: Ohayo, sensei! Katana's path is now crystal clear!

The addition of the `KATANA_RUNNER_BIN` environment variable is a wise move. It ensures that the correct Katana binary is used during testing, which can prevent inconsistencies and make our CI more reliable.

However, to make this change even more robust, consider the following suggestions:

- Add a check to ensure the binary exists before running the tests.
- Consider making this path configurable, perhaps through a GitHub Actions input parameter.

What do you think, sensei? Shall we sharpen our Katana further? Here's a potential improvement:

```yaml
- run: |
    KATANA_RUNNER_BIN=/tmp/bins/katana
    if [ ! -f "$KATANA_RUNNER_BIN" ]; then
      echo "Katana binary not found at $KATANA_RUNNER_BIN"
      exit 1
    fi
    cargo llvm-cov nextest --no-report --all-features --workspace --build-jobs 20
```
Line range hint 1-205: Ohayo again, sensei! Let's polish our CI katana to a mirror shine!

While our CI workflow is already quite sharp, here are some suggestions to make it even more formidable:

- Consistency: Consider using the `RUST_VERSION` environment variable consistently across all jobs that use Rust. This will make it easier to update the Rust version in the future.
- Caching: You're already using `Swatinem/rust-cache@v2` in some jobs. Consider adding it to all jobs that compile Rust code to speed up the workflow.
- Matrix builds: For jobs that test on multiple platforms or Rust versions, consider using matrix builds to reduce duplication.
- Parallel jobs: Some jobs could potentially run in parallel. Review the job dependencies and see if you can optimize the workflow execution time.
- Documentation: Add comments to explain the purpose of each job, especially for complex steps. This will help future maintainers understand the workflow better.
- Conditional jobs: Consider using conditional jobs for tasks that don't need to run on every push or PR, such as documentation builds.

What do you think, sensei? Shall we refine our CI dojo to train even mightier code ninjas?
crates/metrics/src/prometheus_exporter.rs (1)

12-12: Ohayo, sensei! Nice import consolidation!

The change looks good! You've consolidated the imports from `metrics_exporter_prometheus`, which improves code organization. However, I noticed that the `pub` keyword was removed. If `PrometheusHandle` is still meant to be re-exported from this module, consider keeping it public:

```rust
pub use metrics_exporter_prometheus::{PrometheusBuilder, PrometheusHandle};
```

If not, then this change is perfect as is!
crates/katana/rpc/rpc/tests/torii.rs (2)
28-32: Ohayo again, sensei! The updated TestSequencer configuration looks good!

The transition to `SequencerConfig` and the use of `get_default_test_starknet_config()` align well with the project updates. The explicit setting of `no_mining: true` enhances clarity. A small suggestion for even better readability: consider using the struct update syntax to make it clear which fields are being customized:

```rust
SequencerConfig { no_mining: true, ..get_default_test_starknet_config() }
```

This approach would make it immediately obvious which fields are being overridden from the default configuration.
185-189: Ohayo once more, sensei! The TestSequencer configuration for instant mining is on point!

The update to `SequencerConfig` and the use of `get_default_test_starknet_config()` maintain consistency with the previous test. Setting `no_mining: false` is perfect for this instant mining test case. For consistency with the previous suggestion, consider using the struct update syntax here as well:

```rust
SequencerConfig { no_mining: false, ..get_default_test_starknet_config() }
```

This would maintain a consistent style across both test functions and clearly show which fields are being customized.
crates/dojo-utils/src/tx/waiter.rs (3)
304-307: Ohayo, sensei! Consider updating deprecated imports.

The use of `#[allow(deprecated)]` suggests that some imported items are deprecated. While this allows the code to compile without warnings, it's generally a good practice to update to non-deprecated alternatives when available. Could you please investigate whether there are any newer, non-deprecated alternatives for these imports? If not, it might be worth documenting why we're still using deprecated items.
322-326: Ohayo, sensei! Changes look good, but let's document the use of deprecated items.

The implementation change for `create_test_sequencer` looks good, using `SequencerConfig::default()` and `get_default_test_starknet_config()`. However, the `#[allow(deprecated)]` attribute suggests we're using deprecated items. Could you add a comment explaining why we're still using these deprecated items? This will help future maintainers understand the reasoning behind this decision.
Line range hint 1-478: Ohayo, sensei! Overall, the changes look good but let's plan for the future.

The modifications to the test configuration setup are straightforward and don't introduce any apparent issues. However, the use of deprecated items is a concern that we should address in the future. Would you like me to create a GitHub issue to track the task of updating these deprecated items? This will ensure we don't forget about it and can plan for the update when appropriate.
crates/katana/rpc/rpc/src/starknet/mod.rs (1)
64-64: Ohayo again, sensei! Great job on maintaining consistency!

The modification of the `block_producer` parameter type from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` in the `new` method is spot-on. It aligns perfectly with the change in the `Inner` struct and allows the caller to share ownership of the `BlockProducer` with the `StarknetApi` instance. For even better clarity, consider adding a brief comment explaining why `Arc` is used here. Something like:

```rust
// Use Arc to allow shared ownership of BlockProducer across multiple threads
block_producer: Arc<BlockProducer<EF>>,
```

This would help future maintainers understand the rationale behind using `Arc` at a glance.

crates/katana/rpc/rpc/src/starknet/read.rs (1)
Line range hint 434-474: Ohayo, sensei! LGTM with a small suggestion

The changes look good! The consistent use of `this.inner.backend.chain_id` and the introduction of `should_validate` improve code clarity. It's great that you're now respecting the node's `disable_validate` configuration during fee estimation. A small suggestion to further improve readability: consider extracting the `should_validate` logic into a separate function. This would make the main `estimate_fee` function cleaner and easier to understand. Here's a suggested implementation:

```diff
+ fn should_validate_transactions(&self, skip_validate: bool) -> bool {
+     #[allow(deprecated)]
+     !(skip_validate || self.inner.backend.config.disable_validate)
+ }

  async fn estimate_fee(
      &self,
      request: Vec<BroadcastedTx>,
      simulation_flags: Vec<SimulationFlagForEstimateFee>,
      block_id: BlockIdOrTag,
  ) -> RpcResult<Vec<FeeEstimate>> {
      self.on_cpu_blocking_task(move |this| {
          // ... (existing code)
          let skip_validate =
              simulation_flags.contains(&SimulationFlagForEstimateFee::SkipValidate);
-         #[allow(deprecated)]
-         let should_validate = !(skip_validate || this.inner.backend.config.disable_validate);
+         let should_validate = this.should_validate_transactions(skip_validate);
          let flags = katana_executor::SimulationFlag {
              skip_validate: !should_validate,
              // ... (rest of the code)
          };
          // ... (existing code)
      })
      .await
  }
```

This change would encapsulate the validation logic and make it easier to maintain in the future.
crates/katana/core/src/service/block_producer.rs (2)
83-84: LGTM! Consistent with the struct changes, sensei.

The changes in the `interval` method align well with the updated `BlockProducer` struct. Removing the `Arc` wrapper is consistent with the new ownership model. As a minor suggestion, consider using a more descriptive variable name than `prod`. For example:

```diff
- let prod = IntervalBlockProducer::new(backend, Some(interval));
- Self { producer: BlockProducerMode::Interval(prod).into() }
+ let interval_producer = IntervalBlockProducer::new(backend, Some(interval));
+ Self { producer: BlockProducerMode::Interval(interval_producer).into() }
```

This improves code readability by making the variable's purpose more explicit.
90-91: Ohayo! Consistent changes across methods, well done sensei!

The modifications in the `on_demand` and `instant` methods are consistent with the changes in the `interval` method and the updated `BlockProducer` struct. Good job maintaining consistency! For improved readability, consider applying the same naming suggestion to both methods:

```diff
  // In on_demand method
- let prod = IntervalBlockProducer::new(backend, None);
- Self { producer: BlockProducerMode::Interval(prod).into() }
+ let on_demand_producer = IntervalBlockProducer::new(backend, None);
+ Self { producer: BlockProducerMode::Interval(on_demand_producer).into() }

  // In instant method
- let prod = InstantBlockProducer::new(backend);
- Self { producer: BlockProducerMode::Instant(prod).into() }
+ let instant_producer = InstantBlockProducer::new(backend);
+ Self { producer: BlockProducerMode::Instant(instant_producer).into() }
```

This change would make the code more self-documenting and consistent across all three methods.
Also applies to: 97-98
crates/katana/core/src/backend/config.rs (2)
15-21: Consider adding documentation comments for new fields

Ohayo, sensei! To improve code readability and maintainability, please consider adding documentation comments for the newly added fields in `StarknetConfig`: `disable_fee`, `disable_validate`, `db_dir`, and `genesis`.
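A possible shape for those doc comments is sketched below. The field names echo the ones listed in the comment, but the types are guesses for illustration only (the real `genesis` field is omitted because its type isn't shown here):

```rust
// Hedged sketch of field-level doc comments on a config struct; types are
// illustrative assumptions, not the crate's actual definitions.
#[derive(Default)]
pub struct StarknetConfig {
    /// Skip fee charging on transactions when `true`.
    pub disable_fee: bool,
    /// Skip transaction validation when `true`.
    pub disable_validate: bool,
    /// Directory for the database files; `None` keeps state in memory.
    pub db_dir: Option<std::path::PathBuf>,
}

fn main() {
    // Doc comments don't change behavior; defaults still apply as usual.
    let cfg = StarknetConfig::default();
    assert!(!cfg.disable_fee);
    assert!(cfg.db_dir.is_none());
}
```

Doc comments like these also surface in `cargo doc` output and in editor hovers, which is where they pay off most.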
61-61: Avoid using `unwrap()` to prevent potential panics

Ohayo, sensei! Using `unwrap()` on `ChainId::parse("KATANA")` may cause a panic if the parsing fails. Consider using `expect()` with a meaningful message or handling the `Result` to enhance error handling. Apply this diff to make the change:

```diff
- chain_id: ChainId::parse("KATANA").unwrap(),
+ chain_id: ChainId::parse("KATANA").expect("Failed to parse chain ID 'KATANA'"),
```

crates/katana/node/src/lib.rs (3)
Line range hint 115-135: Ohayo, sensei! Handle potential panics when converting gas prices

The conversion of `block.l1_gas_price.price_in_fri` to `u128` using `.expect("should fit in u128")` will cause a panic if the value doesn't fit. To prevent unexpected crashes, consider handling this conversion more gracefully. Apply this diff to handle possible conversion errors:

```diff
- starknet_config.genesis.gas_prices.strk =
-     block.l1_gas_price.price_in_fri.to_u128().expect("should fit in u128");
+ starknet_config.genesis.gas_prices.strk = block
+     .l1_gas_price
+     .price_in_fri
+     .to_u128()
+     .unwrap_or_default();
```

Alternatively, add proper error handling to address the conversion issue.
Line range hint 279-288: Ohayo, sensei! Prevent panics by handling invalid origins gracefully

Using `.expect("Invalid URI")` when parsing allowed origins can cause the server to panic if an invalid URI is provided. To avoid unexpected crashes, consider handling parsing errors and providing informative feedback. Apply this diff to handle parsing errors gracefully:

```diff
  let cors = config.allowed_origins.clone().map(|allowed_origins| match allowed_origins.as_slice() {
      [origin] if origin == "*" => cors.allow_origin(AllowOrigin::mirror_request()),
      origins => {
+         let mut valid_origins = Vec::new();
+         for o in origins {
+             match o.parse::<Uri>() {
+                 Ok(_) => valid_origins.push(o.parse().unwrap()),
+                 Err(e) => {
+                     eprintln!("Invalid origin '{}': {}", o, e);
+                     // Optionally, log the error or handle it accordingly
+                 }
+             }
+         }
          cors.allow_origin(valid_origins)
      },
  });
```
231-231: Ohayo, sensei! Confirm task naming consistency

In the block production task, the task is named "BlockProduction". Ensure that task names across the application are consistent and follow the same naming conventions for easier monitoring and debugging.
crates/katana/core/src/backend/mod.rs (1)
105-105: Ohayo, sensei! Modifying a deprecated `config` field

On line 105, `config.env.chain_id` is modified even though `config` is marked as deprecated. Consider whether continuing to modify a deprecated field aligns with your deprecation strategy. Perhaps introduce a new configuration mechanism or migrate the necessary fields to avoid using the deprecated `config`.
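One way to stage such a migration is sketched below: marking the old field `#[deprecated]` makes any remaining writes (like the one flagged above) surface as compiler warnings. The `Backend` layout and field names here are illustrative stand-ins, not the crate's real types:

```rust
// Hedged sketch of migrating off a deprecated config field.
struct Backend {
    #[deprecated(note = "use `chain_id` on Backend directly")]
    legacy_chain_id: u64, // hypothetical stand-in for config.env.chain_id
    chain_id: u64,
}

// Construction still touches the deprecated field, so the allow is confined
// to this one constructor while call sites migrate to `chain_id`.
#[allow(deprecated)]
fn make_backend(id: u64) -> Backend {
    Backend { legacy_chain_id: id, chain_id: id }
}

fn main() {
    let b = make_backend(1);
    assert_eq!(b.chain_id, 1);
}
```

Once every writer goes through the new field, the deprecated one (and its `allow`) can be deleted in a single follow-up change.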
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (52)
- .github/workflows/ci.yml (1 hunks)
- Cargo.toml (0 hunks)
- bin/katana/Cargo.toml (1 hunks)
- bin/katana/src/cli/node.rs (11 hunks)
- crates/dojo-test-utils/Cargo.toml (1 hunks)
- crates/dojo-test-utils/src/sequencer.rs (4 hunks)
- crates/dojo-utils/src/tx/waiter.rs (2 hunks)
- crates/katana/core/Cargo.toml (1 hunks)
- crates/katana/core/src/backend/config.rs (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/lib.rs (1 hunks)
- crates/katana/core/src/sequencer.rs (1 hunks)
- crates/katana/core/src/service/block_producer.rs (1 hunks)
- crates/katana/core/src/service/messaging/service.rs (2 hunks)
- crates/katana/core/src/service/mod.rs (3 hunks)
- crates/katana/node-bindings/Cargo.toml (0 hunks)
- crates/katana/node-bindings/src/json.rs (1 hunks)
- crates/katana/node-bindings/src/lib.rs (6 hunks)
- crates/katana/node/Cargo.toml (0 hunks)
- crates/katana/node/src/config/db.rs (0 hunks)
- crates/katana/node/src/config/dev.rs (0 hunks)
- crates/katana/node/src/config/metrics.rs (0 hunks)
- crates/katana/node/src/config/mod.rs (0 hunks)
- crates/katana/node/src/config/rpc.rs (0 hunks)
- crates/katana/node/src/exit.rs (0 hunks)
- crates/katana/node/src/lib.rs (7 hunks)
- crates/katana/pipeline/Cargo.toml (0 hunks)
- crates/katana/pipeline/src/lib.rs (0 hunks)
- crates/katana/pipeline/src/stage/mod.rs (0 hunks)
- crates/katana/pipeline/src/stage/sequencing.rs (0 hunks)
- crates/katana/pool/src/validation/stateful.rs (1 hunks)
- crates/katana/primitives/src/chain_spec.rs (0 hunks)
- crates/katana/primitives/src/lib.rs (0 hunks)
- crates/katana/rpc/rpc-api/src/lib.rs (1 hunks)
- crates/katana/rpc/rpc/src/config.rs (1 hunks)
- crates/katana/rpc/rpc/src/dev.rs (2 hunks)
- crates/katana/rpc/rpc/src/lib.rs (1 hunks)
- crates/katana/rpc/rpc/src/saya.rs (1 hunks)
- crates/katana/rpc/rpc/src/starknet/mod.rs (1 hunks)
- crates/katana/rpc/rpc/src/starknet/read.rs (4 hunks)
- crates/katana/rpc/rpc/src/starknet/trace.rs (2 hunks)
- crates/katana/rpc/rpc/src/starknet/write.rs (3 hunks)
- crates/katana/rpc/rpc/src/torii.rs (1 hunks)
- crates/katana/rpc/rpc/tests/dev.rs (1 hunks)
- crates/katana/rpc/rpc/tests/saya.rs (3 hunks)
- crates/katana/rpc/rpc/tests/starknet.rs (14 hunks)
- crates/katana/rpc/rpc/tests/torii.rs (3 hunks)
- crates/katana/tasks/src/lib.rs (0 hunks)
- crates/katana/tasks/src/manager.rs (6 hunks)
- crates/katana/tasks/src/task.rs (6 hunks)
- crates/metrics/src/prometheus_exporter.rs (1 hunks)
- torii.toml (1 hunks)
💤 Files with no reviewable changes (16)
- Cargo.toml
- crates/katana/node-bindings/Cargo.toml
- crates/katana/node/Cargo.toml
- crates/katana/node/src/config/db.rs
- crates/katana/node/src/config/dev.rs
- crates/katana/node/src/config/metrics.rs
- crates/katana/node/src/config/mod.rs
- crates/katana/node/src/config/rpc.rs
- crates/katana/node/src/exit.rs
- crates/katana/pipeline/Cargo.toml
- crates/katana/pipeline/src/lib.rs
- crates/katana/pipeline/src/stage/mod.rs
- crates/katana/pipeline/src/stage/sequencing.rs
- crates/katana/primitives/src/chain_spec.rs
- crates/katana/primitives/src/lib.rs
- crates/katana/tasks/src/lib.rs
🧰 Additional context used
🔇 Additional comments (81)
crates/katana/core/src/lib.rs (3)
Line range hint 1-1: Ohayo again, sensei! The compiler attribute is a nice touch!

The `#![cfg_attr(not(test), warn(unused_crate_dependencies))]` attribute is a great practice. It helps keep the dependency tree clean by warning about unused crate dependencies in non-test builds. Excluding test builds from this warning is smart, as test dependencies might not be used in the main code.
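For context, a minimal crate root using the attribute discussed here would look like the sketch below. The attribute only changes diagnostics (warnings about unused `Cargo.toml` dependencies), never runtime behavior:

```rust
// Sketch of a crate root with the lint enabled for non-test builds only.
#![cfg_attr(not(test), warn(unused_crate_dependencies))]

fn lint_is_config_only() -> &'static str {
    // The cfg_attr affects compiler warnings, not what the program does.
    "ok"
}

fn main() {
    assert_eq!(lint_is_config_only(), "ok");
}
```

Because the lint is gated behind `not(test)`, dev-dependencies used only by tests don't trigger spurious warnings.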
Line range hint 1-10: Ohayo once more, sensei! Let's wrap up this review.

The changes to this file, while minimal, are significant. The addition of the `sequencer` module suggests an expansion of the library's functionality, which could have far-reaching effects on the project. The existing structure and good practices have been maintained, including the alphabetical ordering of modules and the use of compiler attributes for dependency management.

As we move forward, it would be beneficial to:

- Ensure proper documentation for the new `sequencer` module.
- Update any relevant tests to cover the new functionality.
- Consider the impact of this new module on the overall architecture and performance of the system.

Great job on keeping the codebase clean and well-structured, sensei!
6-6: Ohayo, sensei! The `sequencer` module addition looks good!

The new `sequencer` module has been added in alphabetical order, maintaining the existing structure of the file. This addition expands the functionality of the library, which is a positive change. Let's verify the existence of the `sequencer` module file.

✅ Verification successful

Ohayo, sensei! Verified the existence of the `sequencer` module file. Everything looks good!

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the existence of the sequencer module file
# Test: Check if the sequencer.rs file exists in the expected location
fd --type f --full-path "crates/katana/core/src/sequencer.rs"
```

Length of output: 97
crates/katana/rpc/rpc/src/config.rs (1)

1-3: Ohayo, sensei! LGTM on the imports!

The imports look good and are relevant to the struct fields. Nice job keeping it clean and focused!
torii.toml (2)

1-14: Ohayo once more, sensei! Let's consider the impact on the system.

The introduction of this `contracts` configuration in `torii.toml` likely affects how the Torii framework interacts with these blockchain contracts. It's important to ensure that the rest of the system is prepared to use this new configuration format. To check this, we can search for places in the codebase where these contract addresses might be used:

```shell
#!/bin/bash
# Description: Search for potential usage of contract addresses in the codebase
echo "Searching for potential contract address usage:"
echo "-----------------------------------------------"
rg --type rust --type typescript --type javascript '0x[a-fA-F0-9]{64}'
echo "-----------------------------------------------"
echo "Please review these occurrences and ensure they are updated to use the new configuration."
```

Sensei, could you run this script and verify whether there are any hardcoded contract addresses in the codebase that should be updated to use this new configuration? This will help ensure consistency and make future updates easier.

1-14: Ohayo again, sensei! Let's verify the contract addresses.

It's crucial to ensure that the contract addresses are correct and match the intended contracts on the blockchain. While I can't directly verify these addresses, it would be wise to double-check them. The verification script extracts and displays all contract addresses from the `torii.toml` file. Sensei, could you please run it and verify each address against the official contract deployment information?

crates/dojo-test-utils/Cargo.toml (1)
20-21: Ohayo, sensei! New katana dependencies look sharp!

The addition of `katana-rpc` and `katana-rpc-api` dependencies seems appropriate for enhancing the test utilities. These new katanas will surely slice through any RPC-related testing challenges! Let's make sure these new dependencies are properly forged into our project, sensei.
crates/katana/core/Cargo.toml (1)
24-24: Ohayo, sensei! LGTM: Adding num-traits dependency.

The addition of `num-traits` to the dependencies looks good. This crate provides a set of numeric traits that will likely enhance the mathematical capabilities of the katana-core package. Using `.workspace = true` ensures version consistency across the project, which is a wise choice.

crates/katana/rpc/rpc/src/dev.rs (3)
15-15: Ohayo, sensei! Nice work on enhancing thread safety!

The change from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` is a solid improvement. By wrapping `BlockProducer<EF>` in an `Arc`, we're enabling shared ownership and thread-safe reference counting. This modification suggests that `block_producer` might be shared across multiple threads or stored in multiple places, which is a good practice for concurrent systems.
19-21: Ohayo again, sensei! Excellent consistency in your changes!

The update to the `new` method signature, accepting `Arc<BlockProducer<EF>>` instead of `BlockProducer<EF>`, is spot on. This change aligns perfectly with the modification we saw in the `DevApi` struct definition. It ensures that the `Arc<BlockProducer<EF>>` is passed directly into the struct, maintaining the shared ownership model throughout the codebase. Keep up the good work!
96-96: Ohayo once more, sensei! Interesting change in account retrieval!

The modification to fetch accounts from `self.backend.config.genesis` instead of `self.backend.chain_spec.genesis` looks good. This change suggests a shift in how we're managing our configuration, possibly centralizing it. Nice move! However, let's make sure we've got all our ducks in a row:

- Is this change consistent across the entire codebase?
- Does `self.backend.config.genesis` provide the same data structure and content as the previous source?

Let's run a quick check to ensure consistency. These checks will help us confirm the consistency of the change and identify any potential issues.

✅ Verification successful

Ohayo once more, sensei! After running the verification scripts, everything checks out smoothly. There are no lingering references to `chain_spec.genesis`, and `config.genesis` is consistently utilized across the codebase with the expected structure. Great job on centralizing the configuration!

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the usage of config.genesis vs chain_spec.genesis

# Test 1: Check for any remaining uses of chain_spec.genesis
echo "Checking for remaining uses of chain_spec.genesis:"
rg --type rust 'chain_spec\.genesis'

# Test 2: Check for uses of config.genesis
echo "Checking for uses of config.genesis:"
rg --type rust 'config\.genesis'

# Test 3: Check the structure of config.genesis
echo "Checking the structure of config.genesis:"
ast-grep --lang rust --pattern 'config.genesis.$_'
```

Length of output: 4073
crates/katana/rpc/rpc/src/starknet/write.rs (2)
`24-24`: Ohayo, sensei! LGTM: Chain ID retrieval simplified.

The change to use `this.inner.backend.chain_id` instead of `this.inner.backend.chain_spec.id` simplifies the chain ID retrieval process. This modification is consistent with other methods in the file and improves code clarity.

`44-45`: Ohayo again, sensei! LGTM: Consistent chain ID usage in declare transaction.

The modification to use `this.inner.backend.chain_id` aligns with the changes made in the `add_invoke_transaction_impl` method. The error handling for invalid contract class remains intact, ensuring robust transaction processing.

crates/katana/core/src/service/mod.rs (3)
`6-6`: Ohayo, sensei! The `Arc` import looks good.

The addition of `use std::sync::Arc;` is necessary and correctly placed for the changes made to the `BlockProductionTask` struct.

Line range hint `1-124`: Ohayo, sensei! Let's summarize the changes and consider their broader impact.

The introduction of `Arc<BlockProducer<EF>>` in the `BlockProductionTask` struct and its constructor is a significant change that enhances thread safety and allows for shared ownership. While the changes in this file are consistent and well-implemented, there are a few points to consider:

- The `poll` method in the `Future` implementation now operates on an `Arc`-wrapped `block_producer`. This change is transparent due to `Arc`'s deref coercion, but it's worth noting for future maintenance.
- The `TransactionMiner` and other parts of this file remain unchanged, which is good for maintaining existing functionality.
- These changes might have implications for other parts of the codebase that interact with `BlockProductionTask`.

To ensure the changes are fully integrated:

- Review any code that creates or interacts with `BlockProductionTask` to ensure it's using `Arc` correctly.
- Consider adding comments or documentation explaining the thread-safety benefits of using `Arc` for `BlockProducer`.
- If not already done, update any relevant tests to reflect these changes.

Let's check for any other uses of `BlockProducer` that might need similar `Arc` wrapping:

```bash
#!/bin/bash
# Description: Find all occurrences of BlockProducer to check for potential Arc wrapping needs

# Test: Search for BlockProducer usage
rg --type rust -A 5 'BlockProducer'
```

Ohayo and arigato for these thoughtful changes, sensei! They contribute to a more robust and concurrent-friendly design.
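The "transparent due to deref coercion" point deserves a tiny illustration. The sketch below uses a hypothetical type, not Katana's actual `poll` implementation: `&self` method calls go straight through an `Arc<T>`, so existing call sites keep compiling after the wrapping.

```rust
use std::sync::Arc;

// Hypothetical stand-in for the block producer.
struct Producer;

impl Producer {
    fn is_instant_mining(&self) -> bool {
        true
    }
}

fn main() {
    let direct = Producer;
    let shared = Arc::new(Producer);

    // Deref coercion: Arc<Producer> auto-derefs to &Producer, so the
    // call sites look identical before and after the Arc wrapping.
    assert_eq!(direct.is_instant_mining(), shared.is_instant_mining());
}
```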
`45-49`: Ohayo, sensei! The `new` method update looks good, but let's verify its usage.

The change in the `new` method signature to accept `Arc<BlockProducer<EF>>` is consistent with the modification of the `block_producer` field. This ensures that the `BlockProductionTask` is created with a shared `BlockProducer` instance. To ensure this change doesn't break existing code, let's verify all calls to this `new` method:

✅ Verification successful

Ohayo, sensei! It looks like the previous script had a hiccup. Let's run a corrected version to verify all usages of the `new` method.

Ohayo, sensei! The `new` method is being used correctly in the codebase. No issues found with its usage.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Find all occurrences of BlockProductionTask::new to verify correct usage

# Test: Search for BlockProductionTask::new calls
rg --type rust -A 5 'BlockProductionTask::new'
```

Length of output: 25557

```bash
#!/bin/bash
# Description: Find all occurrences of BlockProductionTask::new to verify correct usage

# Test: Search for BlockProductionTask::new calls with proper syntax
rg --type rust 'BlockProductionTask::new' -A 5
```

Length of output: 514
19-19
: Ohayo, sensei! Nice use of Arc for shared ownership.The change from
BlockProducer<EF>
toArc<BlockProducer<EF>>
for theblock_producer
field is a good improvement. This allows for efficient sharing of theBlockProducer
instance across multiple owners without the need for deep cloning. It's also consistent with the use ofArc
for thebackend
field, promoting a uniform approach to resource management.
`24-27`: Ohayo again, sensei! Excellent update to the Clone implementation.

The modification to use `Arc::clone` for the `block_producer` field in the `Clone` implementation is spot-on. This change:

- Correctly handles the new `Arc` wrapper around `BlockProducer`.
- Ensures proper reference counting, avoiding unnecessary deep clones.
- Maintains consistency with the cloning approach used for the `backend` field.

Great job on keeping the implementation aligned with the struct changes!
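A minimal sketch of this pattern, with hypothetical field types standing in for the real `SayaApi` definition: `Arc::clone` bumps the reference count instead of deep-copying either field.

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the backend and block producer.
struct Backend;
struct Producer;

struct SayaApiLike {
    backend: Arc<Backend>,
    block_producer: Arc<Producer>,
}

impl Clone for SayaApiLike {
    fn clone(&self) -> Self {
        Self {
            // Arc::clone makes the cheap reference-count bump explicit
            // at the call site, rather than looking like a deep copy.
            backend: Arc::clone(&self.backend),
            block_producer: Arc::clone(&self.block_producer),
        }
    }
}

fn main() {
    let api = SayaApiLike {
        backend: Arc::new(Backend),
        block_producer: Arc::new(Producer),
    };
    let copy = api.clone();

    // Both handles point at the very same allocations.
    assert!(Arc::ptr_eq(&api.block_producer, &copy.block_producer));
    assert_eq!(Arc::strong_count(&api.backend), 2);
}
```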
Line range hint `1-138`: Ohayo, sensei! Overall excellent changes to the SayaApi implementation.

The modifications to `SayaApi` consistently improve the ownership model for the `BlockProducer`. By using `Arc`, you've enhanced the efficiency of resource sharing while maintaining clear ownership semantics. The changes are well-thought-out and consistently applied across the struct definition, `Clone` implementation, and constructor method. Great job on improving the code quality and resource management!
crates/katana/tasks/src/task.rs (4)
`9-9`: Ohayo, sensei! These import changes look sharp!

The addition of `error` from `tracing` and the update to import `TaskManager` align well with the refactoring from `TaskSpawner` to `TaskManager`. It seems we might be gearing up for some enhanced error logging. Nice move!

Also applies to: 11-11
`35-35`: Sensei, you've mastered the art of refactoring!

The `TaskBuilder` struct has been elegantly updated to use `TaskManager` instead of `TaskSpawner`. The changes in the field name and the `new` method signature are consistent and maintain the lifetime parameter. This refactoring aligns perfectly with the overall shift in task management approach. Excellent work!

Also applies to: 47-48
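The lifetime-parameterized builder shape referred to above looks roughly like this. All names are hypothetical stand-ins; the real `TaskBuilder` carries more machinery.

```rust
// Hypothetical stand-in for katana's TaskManager.
struct Manager {
    label: &'static str,
}

// The builder borrows the manager for 'a instead of owning it, so several
// builders can be created from the same manager one after another.
struct Builder<'a> {
    manager: &'a Manager,
    name: Option<String>,
}

impl<'a> Builder<'a> {
    fn new(manager: &'a Manager) -> Self {
        Self { manager, name: None }
    }

    fn name(mut self, name: &str) -> Self {
        self.name = Some(name.to_string());
        self
    }

    // Consumes the builder; here it just renders a label for illustration.
    fn build(self) -> String {
        format!("{}/{}", self.manager.label, self.name.unwrap_or_default())
    }
}

fn main() {
    let manager = Manager { label: "tasks" };
    let a = Builder::new(&manager).name("miner").build();
    let b = Builder::new(&manager).name("rpc").build();
    assert_eq!(a, "tasks/miner");
    assert_eq!(b, "tasks/rpc");
}
```

Keeping the `&'a Manager` borrow (rather than an owned copy) is what lets the builder stay a short-lived, zero-cost configuration step.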
`128-129`: Sensei, your critical task handling is as sharp as a katana!

The changes in the `CriticalTaskBuilder` are spot-on, maintaining consistency with the `TaskManager` refactoring. The logic for handling critical tasks remains intact, which is crucial for maintaining system stability. Great job on keeping everything aligned!

Line range hint `1-158`: Ohayo, sensei! Let's wrap up this epic refactoring journey!

The shift from `TaskSpawner` to `TaskManager` has been implemented consistently throughout the file. This refactoring maintains the core functionality of task building and spawning while potentially offering new capabilities through the `TaskManager`. However, there are a few points to consider:

- The removal of the graceful shutdown logging might impact our ability to debug and monitor task lifecycles. We should ensure this information is captured elsewhere if needed.
- This change likely has implications for other parts of the system that interact with tasks. It would be wise to verify that all dependent components have been updated accordingly.
- The change in the cancellation token retrieval method (`manager.on_cancel` vs `spawner.cancellation_token()`) might indicate a change in the cancellation mechanism. We should ensure this doesn't introduce any unexpected behavior.

To ensure a smooth transition, let's run some checks:

```bash
#!/bin/bash
# Check for any remaining references to TaskSpawner
echo "Checking for remaining TaskSpawner references:"
rg --type rust 'TaskSpawner' crates/

# Look for potential mismatches in task cancellation handling
echo "Checking for potential mismatches in task cancellation:"
rg --type rust 'cancellation_token|on_cancel' crates/

# Verify consistent use of TaskManager
echo "Verifying consistent use of TaskManager:"
rg --type rust 'TaskManager' crates/
```

Great work on this refactoring, sensei! It sets a solid foundation for future improvements in our task management system.
crates/katana/rpc/rpc/tests/dev.rs (4)
Line range hint `15-95`: Ohayo once more, sensei! Let's examine the test functions.

I'm pleased to see that the core structure and logic of the test functions have remained stable. This is excellent news! Here's why:

- Stability in tests: The unchanged nature of these tests suggests that the core functionality being tested hasn't been significantly altered. This is crucial for maintaining the reliability of our test suite.
- Consistent coverage: By keeping these tests intact, we ensure that we're still covering the same scenarios and edge cases as before.
- Easier review: The stability of these tests makes it easier to focus on the actual changes in the production code, rather than having to scrutinize changes in both the tests and the code they're testing.

However, it's always good to double-check if any new functionality introduced by the configuration changes might require additional test cases. Let's run a quick check to see if there are any TODO comments or FIXME notes that might indicate areas needing additional testing:

```bash
#!/bin/bash
# Check for TODO or FIXME comments in the test files
rg --type rust "TODO|FIXME" crates/katana/rpc/rpc/tests/
```

If any are found, consider whether they relate to the recent changes and if additional test cases might be needed.
Line range hint `97-122`: Ohayo for the last time, sensei! Let's address the commented-out test.

I noticed a commented-out test function at the end of the file. While it's sometimes useful to keep old code around for reference, it can also lead to confusion and clutter. Here are a few thoughts:

- Relevance: The test seems to be related to setting storage in "instant mode". Is this functionality still relevant to the current state of the project?
- Maintenance: Commented-out code can become outdated quickly as the rest of the codebase evolves. If this test is still relevant, consider updating and uncommenting it.
- Version Control: Remember that our version control system (like Git) keeps track of old code. If this test is no longer needed, it might be better to remove it entirely and rely on version history if we need to reference it later.
- TODO Comment: If there's a specific reason this test is commented out (e.g., it's waiting for a bug fix or a feature to be implemented), consider adding a TODO comment explaining why it's commented out and when it should be re-enabled.

Consider either updating and uncommenting this test, or removing it if it's no longer relevant. If there's a specific reason it needs to remain commented out for now, please add a TODO comment explaining why. Let's check if there are other commented-out tests in the project that might need similar attention:

```bash
#!/bin/bash
# Check for commented-out test functions
rg --type rust "^//\s*#\[test\]" crates/katana/
```

This will help us identify if this is an isolated case or if there's a broader pattern of commented-out tests that needs addressing.
`1-4`: Ohayo, sensei! Let's address the deprecation warning and import changes.

The addition of the deprecation warning and the changes in imports suggest that there have been some structural updates in the project. Here are a few points to consider:

- The deprecation warning might indicate that some parts of this test file are using deprecated features. It would be wise to plan for updating these in the future.
- The import change from `get_default_test_config` to `get_default_test_starknet_config` suggests a more specific configuration setup for Starknet.
- The change from `SequencingConfig` to `SequencerConfig` might indicate a naming convention update or a change in the underlying structure.

To ensure these changes are consistent across the project, let's run a quick check:

```bash
#!/bin/bash
# Check for usage of deprecated config and new config
rg --type rust "get_default_test_config|get_default_test_starknet_config"
rg --type rust "SequencingConfig|SequencerConfig"
```
`9-11`: Ohayo again, sensei! Let's look at the `create_test_sequencer` function changes.

The `create_test_sequencer` function has undergone some significant changes:

- The function is now async, which suggests that it's performing asynchronous operations. This is a good change as it allows for better handling of potentially time-consuming operations.
- The way the TestSequencer is started has been updated:
  - Old: `TestSequencer::start(get_default_test_config(SequencingConfig::default()))`
  - New: `TestSequencer::start(SequencerConfig::default(), get_default_test_starknet_config()).await`

This change reflects the new configuration structure we observed in the imports. It separates the sequencer config from the Starknet config, which could provide more flexibility and clarity. These changes seem to align well with the project's evolution. However, it's important to ensure that all calls to `create_test_sequencer` across the project have been updated to handle the new async nature of the function. Let's verify the usage of this function across the project. If any non-async usage is found, those instances will need to be updated.
✅ Verification successful
Ohayo again, sensei! It seems the previous search didn’t find any instances of
create_test_sequencer()
. Let’s run a corrected script to locate all usages of this function and ensure they handle the new asynchronous changes properly.
Ohayo again, sensei! I’ve reviewed all instances of
create_test_sequencer()
and confirmed that each usage appropriately handles the new async nature with.await
. Everything looks good!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Check for usage of create_test_sequencer rg --type rust "create_test_sequencer\(\)"Length of output: 6181
Script:
#!/bin/bash # Correctly search for usage of create_test_sequencer in Rust files rg --type rust "create_test_sequencer\(\)" --context 2Length of output: 2691
crates/metrics/src/prometheus_exporter.rs (1)
Line range hint `1-236`: Ohayo again, sensei! Let's talk about the big picture!

While we only had one small change to review, I couldn't help but notice how well-structured this file is! The `prometheus_exporter.rs` file is doing a great job handling Prometheus metrics. Here's a quick rundown:

- The `install_recorder` function sets up the Prometheus metrics recorder.
- The `serve` function handles serving metrics over HTTP with hooks for database and process metrics.
- The `serve_with_hooks` function provides flexibility for custom metric collection.
- Memory statistics collection is conditionally compiled based on the `jemalloc` feature.

Great work on maintaining a clean and modular structure! It makes the code easy to understand and extend. Keep up the excellent work, sensei!

crates/katana/pool/src/validation/stateful.rs (1)
crates/katana/pool/src/validation/stateful.rs (1)
27-28
: Ohayo, sensei! LGTM, but could you clarify the reasoning?The changes to the
TxValidator
struct look good. Removing theDebug
derive and adding#[allow(missing_debug_implementations)]
suggests an intentional decision not to implementDebug
.Could you please clarify the rationale behind this change? Is it for performance reasons or to avoid exposing internal details?
crates/katana/rpc/rpc/tests/torii.rs (2)
`7-7`: Ohayo, sensei! LGTM on the import updates!

The changes in imports reflect the project's evolution, transitioning from `get_default_test_config` to `get_default_test_starknet_config` and from `SequencingConfig` to `SequencerConfig`. These updates align well with the broader changes in the project structure and naming conventions.

Also applies to: 10-10

Line range hint `1-280`: Ohayo, sensei! Let's wrap up this review!

Overall, the changes in this file are well-implemented and consistent. They successfully transition the tests to use the new `SequencerConfig` and `get_default_test_starknet_config()`, aligning with the broader project updates. The core logic of the tests remains intact, which is great for maintaining the integrity of your test suite. A few key points:

- The import updates reflect the project's evolution.
- Both test functions now use the updated configuration structure.
- The explicit setting of the `no_mining` parameter in both tests enhances clarity.

The minor suggestions for using the struct update syntax would further improve readability and consistency, but they're not critical changes. Great job on keeping the tests up-to-date with the project changes, sensei!
crates/katana/rpc/rpc/src/torii.rs (4)
`25-25`: Ohayo, sensei! Nice use of Arc for shared ownership.

The change from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` is a good move. It allows for shared ownership of the `BlockProducer`, which can be beneficial in concurrent scenarios. This change suggests that the `BlockProducer` might be accessed from multiple places simultaneously, improving the overall design for concurrency.

`33-33`: Ohayo again, sensei! Excellent use of Arc::clone here.

The update to use `Arc::clone(&self.block_producer)` in the `Clone` implementation is spot on. This change correctly handles the new `Arc<BlockProducer<EF>>` type, ensuring that the reference count is properly incremented when cloning the `ToriiApi` struct. It's a necessary and correct adjustment following the earlier change to the `block_producer` field type.

`39-43`: Ohayo once more, sensei! Great job on both functionality and formatting.

The update to the `new` method is twofold and both changes are commendable:

- The parameter type change from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` aligns perfectly with the earlier modification to the struct field. This ensures consistency throughout the implementation.
- The improved formatting, with parameters listed on separate lines, enhances code readability. This is particularly beneficial for methods with multiple parameters, making the code easier to read and maintain.

Keep up the excellent work, sensei!
Line range hint `114-114`: Ohayo, debugging sensei! Let's clean up those print statements.

I noticed you've added two `println!` statements, likely for debugging purposes:

- `println!("ohayo");` (line 114)
- `println!("txs count: {}", pending_transactions.len());` (line 145)

While these are helpful during development, it's best to remove them before merging to production. Consider using a logging framework for more structured debugging in the future.

Also, given the change to `Arc<BlockProducer<EF>>`, it might be worth double-checking if any adjustments are needed in this method to properly dereference the `Arc` when accessing the `BlockProducer`. The lack of changes here suggests it might be fine, but it's worth a second look to ensure everything works as expected with the new `Arc` wrapper. Would you like me to provide a diff for removing these print statements?

Also applies to: 145-145
Also applies to: 145-145
crates/katana/core/src/service/messaging/service.rs (1)
`88-88`: Ohayo, sensei! LGTM, but let's verify the change consistency.

The update from `backend.chain_spec.id` to `backend.chain_id` looks good and simplifies access to the chain identifier. Nice work on keeping it consistent for both Ethereum and Starknet messaging modes! To ensure this change is applied consistently across the codebase, let's run a quick check. This will help us spot any inconsistencies and ensure the refactoring is complete across the project.

Also applies to: 106-106

✅ Verification successful

Ohayo, sensei! Verification Successful.

The `chain_spec.id` references have been successfully replaced with `chain_id` across the codebase. No remaining instances of `chain_spec.id` were found. Great job ensuring consistency!

🏁 Scripts executed
were found. Great job ensuring consistency!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the usage of chain_id vs chain_spec.id # Test 1: Check for any remaining instances of chain_spec.id echo "Checking for remaining instances of chain_spec.id:" rg --type rust 'chain_spec\.id' # Test 2: Verify the usage of chain_id echo "Verifying usage of chain_id:" rg --type rust 'chain_id'Length of output: 21064
crates/katana/rpc/rpc/src/starknet/mod.rs (1)
`56-56`: Ohayo, sensei! Nice work on enhancing thread safety!

The change from `BlockProducer<EF>` to `Arc<BlockProducer<EF>>` is a solid improvement. This modification allows for shared ownership of the `BlockProducer`, which is particularly beneficial in concurrent scenarios. The use of `Arc` ensures thread-safe access to the `BlockProducer`, promoting better resource management and potentially improving performance in multi-threaded environments.

crates/katana/rpc/rpc/src/starknet/read.rs (2)
`32-32`: Ohayo, sensei! LGTM: Simplified chain_id access

The change to access `chain_id` directly from `self.inner.backend.chain_id` looks good. It simplifies the code and maintains consistency with other parts of the implementation.

`498-498`: Ohayo, sensei! LGTM: Consistent chain_id access

The change to use `this.inner.backend.chain_id` is consistent with the modifications in other methods. This update maintains uniformity across the implementation.

crates/katana/node-bindings/src/json.rs (3)
19-26
: Ohayo, sensei! Excellent refactoring of theJsonLog
structure.Renaming
JsonLog
toJsonLogMessage
and simplifying the generic type improves code clarity and maintainability. This change makes the purpose of the struct more explicit.
41-46
: Update all usages ofKatanaInfo
to include the newaddress
field.Adding the
address
field toKatanaInfo
enhances the log information. Ensure that any code interacting withKatanaInfo
accounts for this new field to prevent potential issues.Run the following script to locate usages of
KatanaInfo
and verify they handle theaddress
field:#!/bin/bash # Description: Find all instances where `KatanaInfo` is used and display surrounding code. rg --type rust 'KatanaInfo' -A 3 -B 1
33-39
: Verify consistent error handling in deserialization functions.The custom deserialization function
deserialize_katana_info
currently maps errors usingserde::de::Error::custom
. To maintain consistency, ensure that all similar deserialization functions in the codebase handle errors in the same manner.Run the following script to identify custom deserialization functions and review their error handling:
crates/katana/core/src/backend/config.rs (1)
`18-18`: Ensure all usages of `fork_block_number` are updated due to type change

Ohayo, sensei! The field `fork_block_number` has changed from `Option<BlockNumber>` to `Option<u64>`. Please verify that all references to this field have been updated accordingly in the codebase to prevent type mismatches or unexpected behavior. Run the following script to check for usages of `fork_block_number`:

✅ Verification successful

Verified: All usages of `fork_block_number` have been successfully updated to `Option<u64>`, ensuring type consistency throughout the codebase. Ohayo, sensei! Great job on updating all the `fork_block_number` references.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Find all usages of `fork_block_number` and confirm they are using `Option<u64>`.

# Search for references to `fork_block_number` in the codebase.
rg --type rust 'fork_block_number'
```

Length of output: 1169
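Whether this change breaks call sites hinges on how `BlockNumber` was defined. The sketch below is an illustration under the assumption (common in Starknet codebases, but not confirmed here) that `BlockNumber` was a plain alias for `u64`: an alias swap is invisible to callers, while a newtype would not be.

```rust
// If BlockNumber was a type alias, Option<BlockNumber> and Option<u64>
// are literally the same type and no call site changes are needed.
type BlockNumber = u64;

fn takes_alias(n: Option<BlockNumber>) -> Option<u64> {
    n // identical types: no conversion required
}

// A newtype, by contrast, would force explicit unwrapping at every use.
struct BlockNumberNewtype(u64);

fn takes_newtype(n: Option<BlockNumberNewtype>) -> Option<u64> {
    n.map(|b| b.0)
}

fn main() {
    assert_eq!(takes_alias(Some(42)), Some(42));
    assert_eq!(takes_newtype(Some(BlockNumberNewtype(42))), Some(42));
}
```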
crates/dojo-test-utils/src/sequencer.rs (1)
`122-130`: LGTM

Ohayo, sensei! The `get_default_test_starknet_config` function is well-implemented and sets up the `StarknetConfig` appropriately for testing purposes.

crates/katana/tasks/src/manager.rs (2)
`116-116`: ⚠️ Potential issue: Calling cancel() in Drop may lead to unexpected cancellations

Ohayo, sensei! In the `Drop` implementation of `TaskManager`, calling `self.on_cancel.cancel();` will trigger cancellation when any instance of `TaskManager` is dropped. If there are multiple clones, dropping one will cancel tasks managed by others. This could lead to tasks being cancelled unexpectedly. Please verify if this behavior is intended.
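The hazard is easy to reproduce with a std-only analogy: an `Arc<AtomicBool>` stands in for the shared `CancellationToken` (this is not tokio-util's actual type, just an illustration of the sharing semantics). Because the flag is shared, a `Drop` impl that cancels flips it for every clone.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Analogy for a struct that holds a shared cancellation token and
// cancels it in Drop, like the reviewed TaskManager.
struct Manager {
    on_cancel: Arc<AtomicBool>,
}

impl Clone for Manager {
    fn clone(&self) -> Self {
        Self { on_cancel: Arc::clone(&self.on_cancel) }
    }
}

impl Drop for Manager {
    fn drop(&mut self) {
        // Cancelling in Drop: every clone shares this flag.
        self.on_cancel.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let token = Arc::new(AtomicBool::new(false));
    let manager = Manager { on_cancel: Arc::clone(&token) };
    let clone = manager.clone();

    // Dropping just the clone "cancels" for the original too.
    drop(clone);
    assert!(token.load(Ordering::SeqCst));
    assert!(manager.on_cancel.load(Ordering::SeqCst));
}
```

If per-clone drops should not cancel shared work, the usual fix is to cancel only from one designated owner, or to track the live-handle count before cancelling.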
`15-15`: ⚠️ Potential issue: Potential unintended behavior when cloning TaskManager

Ohayo, sensei! Adding `Clone` to `TaskManager` allows instances to be cloned. However, since `on_cancel` is a shared `CancellationToken`, cloning the `TaskManager` results in clones sharing the same cancellation token. This means that cancelling tasks via one clone will affect all other clones, which may not be the intended behavior. Please verify if this is desired. To verify where `TaskManager` is cloned in the codebase, you can run the following script:

✅ Verification successful

Ohayo, sensei! It looks like the initial search didn't find any instances where `TaskManager` is being cloned. To ensure we cover all possible cloning methods, let's run a more comprehensive search.

Ohayo, sensei! Since we didn't find any cloning instances with the previous searches, let's broaden our search to ensure we're not missing anything.

No Cloning Instances Found for TaskManager

Ohayo, sensei! After thoroughly searching the codebase, we found no instances where `TaskManager` is being cloned. Therefore, adding `Clone` to `TaskManager` does not introduce any unintended shared `CancellationToken` behavior.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Find all places where TaskManager is cloned.

# Test: Search for `.clone()` calls on `TaskManager` instances. Expect: Identify all clone usages.
rg --type rust 'TaskManager.*\.clone\(\)'
```

Length of output: 43

```bash
#!/bin/bash
# Description: Find all places where TaskManager is cloned using different methods.

# Method 1: Using .clone() method
rg --type rust 'TaskManager\s*->\s*clone\s*\(\)'

# Method 2: Using Clone::clone(&task_manager)
rg --type rust 'Clone::clone\s*\(&\s*task_manager\s*\)'
```

Length of output: 108

```bash
#!/bin/bash
# Description: Broadly search for any cloning of TaskManager instances.

# Search for any .clone() calls
rg --type rust '\.clone\s*\(\s*\)' | rg 'TaskManager'

# Search for any Clone::clone calls
rg --type rust 'Clone::clone\s*\(\s*&?\s*TaskManager\s*\)'
```

Length of output: 114
crates/katana/rpc/rpc/src/starknet/trace.rs (1)
`30-30`: The `chain_id` retrieval update looks good

The change to retrieve `chain_id` from `self.inner.backend.chain_id` is appropriate and aligns with the backend structure.

crates/katana/node/src/lib.rs (3)
`83-87`: Ohayo, sensei! Confirm mutation of `starknet_config`

The `start` function accepts `mut starknet_config: StarknetConfig`, and the configuration is modified within the function. Ensure that passing the configuration by mutable reference aligns with the intended design and doesn't introduce side effects elsewhere.

`199-211`: Ohayo, sensei! Ensure metrics service handles possible database absence

When initializing the metrics service, the code assumes that the `db` is available. If `db` is `None`, the reports vector will be empty, which might affect the metrics collection. Please verify that this behavior is acceptable or handle the `None` case appropriately.

`55-60`: Ohayo, sensei! Ensure all necessary fields are included in the `Handle` struct

The new `Handle` struct lacks fields like `prometheus_handle` and `db` that were present in the previous `Node` struct. Please verify that omitting these fields doesn't affect any functionality that depends on them. Run the following script to check for any usages of `prometheus_handle` and `db` in the codebase:

✅ Verification successful

Ohayo, sensei! After reviewing the search results, it appears that omitting `prometheus_handle` and `db` from the `Handle` struct does not impact functionality. The `backend` field likely manages the database interactions, and `prometheus_handle` seems to be handled appropriately within `lib.rs`.

🏁 Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Description: Search for references to `prometheus_handle` and `db` outside the provided context.
rg --type rust 'prometheus_handle|db'
```

Length of output: 126931
bin/katana/src/cli/node.rs (15)
`13-13`: Ohayo, sensei! Added `SocketAddr` import is appropriate.

The inclusion of `use std::net::SocketAddr;` is necessary for handling socket addresses in the server configuration. Good job on adding this import.

`35-36`: Ohayo, sensei! Correctly added imports for `ServerConfig` and `ApiKind`.

The addition of `ServerConfig` from `katana_rpc::config` and `ApiKind` from `katana_rpc_api` is appropriate for the updated server and API configurations.

`99-99`: Ohayo, sensei! Added `messaging` field to `NodeArgs`.

Including the `messaging` option in `NodeArgs` allows for better configuration of messaging with other chains. This enhances the node's flexibility in network interactions.

`118-118`: Ohayo, sensei! Set default server port to `5050`.

Setting the default port to `5050` ensures consistency and reduces the need for manual configuration when deploying the node.
`124-124`: Ohayo, sensei! Updated `host` configuration in `ServerOptions`.

Changing `host` to an `Option<String>` without a default value requires careful handling to prevent runtime issues. Verify that `host` is properly set or defaulted in runtime logic. Since the `server_config` method provides a default `"0.0.0.0"` when `host` is `None`, this should prevent any issues. Good attention to detail.
, this should prevent any issues. Good attention to detail.
127-127
: Ohayo, sensei! Set defaultmax_connections
to100
.Establishing
100
as the default formax_connections
is a reasonable choice to handle concurrent connections without overwhelming the server.
223-228
: Ohayo, sensei! Refactored node startup sequence.The
execute
method now cleanly initializesserver_config
,sequencer_config
, andstarknet_config
, enhancing readability and separation of concerns. Starting the node withkatana_node::start
simplifies the startup process.
231-239
: Ohayo, sensei! Improved shutdown handling withtokio::select!
.Using
tokio::select!
to wait for OS signals or theTaskManager
shutdown streamlines the shutdown process and ensures graceful termination.
242-243
: Ohayo, sensei! Logging shutdown message before stopping the node.Adding a log statement before shutting down enhances observability, informing users that the node is terminating.
249-251
: Ohayo, sensei! Updated log filters ininit_logging
.Adjusting the
DEFAULT_LOG_FILTER
settings refines the logging output, providing more detailed information for specific modules while suppressing unnecessary logs from others. This improves debugging efficiency.
277-289
: Ohayo, sensei! Enhancedserver_config
with default host and API kinds.The
server_config
method now defaults thehost
to"0.0.0.0"
if not specified, ensuring the server listens on all interfaces. IncludingApiKind::Starknet
,ApiKind::Torii
, andApiKind::Saya
by default, andApiKind::Dev
in dev mode, provides flexible API exposure.
Line range hint
294-335
: Ohayo, sensei! Refinedstarknet_config
construction.The
starknet_config
method now builds aStarknetConfig
with enhanced clarity, correctly incorporatingdisable_fee
,disable_validate
,fork_rpc_url
,fork_block_number
, and environment options. This refactoring improves maintainability and readability.
Line range hint
340-389
: Ohayo, sensei! Updatedprint_intro
for better startup information.The
print_intro
function now provides more detailed startup logs, including the server address with an eye-catching rocket emoji 🚀. This enhances user experience by clearly displaying important information upon node startup.
462-474
: Ohayo, sensei! Added testtest_starknet_config_default
.Including this test ensures that the default
StarknetConfig
values are correctly set, improving code reliability and catching potential regressions in default configurations.
496-505
: Ohayo, sensei! Added testtest_starknet_config_custom
.This test verifies that custom arguments correctly configure the
StarknetConfig
, enhancing test coverage and ensuring that customization works as intended.crates/katana/node-bindings/src/lib.rs (2)
587-592
: Test function is well implementedOhayo, sensei! The test
can_launch_katana
correctly verifies the default behavior of Katana. The assertions confirm that the default number of accounts and the chain ID are as expected.
625-629
: Specific port test is validOhayo, sensei! The test
can_launch_katana_with_specific_port
effectively checks that Katana launches on the specified port. The assertion ensures the port is set correctly.crates/katana/rpc/rpc/tests/starknet.rs (13)
11-14
: Ohayo sensei! Updated imports look good.The imports of
get_default_test_starknet_config
andSequencerConfig
are correctly updated and align with the new configuration structure.
42-42
: Ohayo sensei! Initialization of TestSequencer is correct.The use of
SequencerConfig::default()
andget_default_test_starknet_config()
inTestSequencer::start
is appropriate and matches the updated configuration methods.
97-97
: Ohayo sensei! Consistent updates in declare_and_deploy_legacy_contract test.The initialization of the
TestSequencer
in this test function has been correctly updated to use the new configuration structures.
155-159
: Ohayo sensei! Fee configuration is properly applied.The
disable_fee
flag is correctly set instarknet_config
, andsequencer_config
is appropriately initialized with the newblock_time
parameter.
242-242
: Ohayo sensei! Configuration updates in estimate_fee test are accurate.The
TestSequencer
is correctly initialized with the updated configuration methods, ensuring consistency across tests.
361-365
: Ohayo sensei! Fee validation test configurations are correct.The
disable_fee
flag is set tofalse
, ensuring that the fee validation logic is properly tested.
395-399
: Ohayo sensei! Fee and validation configurations are appropriately set.The updates accurately configure both
disable_fee
anddisable_validate
flags in the tests for transactions with insufficient fees.
461-465
: Ohayo sensei! Accurate setup for invalid signature tests.The test correctly initializes a new account with a random signer to simulate invalid signatures, aligning with the intended test case.
518-521
: Ohayo sensei! Nonce handling in tests is correctly implemented.The configuration and logic for testing transactions with invalid nonces are properly updated and function as intended.
586-588: Ohayo sensei! Event retrieval configurations are precise. The no_mining flag is correctly set in sequencer_config for testing event retrieval without mining blocks.
672-674: Ohayo sensei! Pending events test setup is correct. The configurations for testing event retrieval with pending transactions are appropriately updated.
762-766: Ohayo sensei! Trace test configurations are properly adjusted. The no_mining flag and the updated configuration methods are correctly applied in the trace tests.
814-818: Ohayo sensei! Block trace tests are correctly configured. The initialization of TestSequencer with the new configuration ensures accurate testing of block traces.
crates/katana/core/src/backend/mod.rs (1)
207-281: Ohayo, sensei! Ensure tests cover new forking logic. The added forking logic in the new method introduces significant changes. It's important to have tests that cover these new pathways to prevent regressions. Would you like assistance in writing tests for the forking functionality?
#[derive(Debug, Default)]
pub struct SequencerConfig {
    pub block_time: Option<u64>,
    pub no_mining: bool,
    pub messaging: Option<crate::service::messaging::MessagingConfig>,
}
🛠️ Refactor suggestion
Ohayo again, sensei! The SequencerConfig structure looks solid, but let's consider a few enhancements.
The structure provides a good foundation for sequencer configuration. Here are some suggestions to improve it further:
- Consider adding documentation comments for each field to explain their purpose and expected values.
- The no_mining field might be clearer as enable_mining: bool with an inverted meaning, making it more intuitive.
- If block_time has a typical default value, consider using block_time: u64 with a default implementation instead of Option<u64>.
Example improvement:
#[derive(Debug)]
pub struct SequencerConfig {
    /// The time between blocks in milliseconds. None means variable block time.
    pub block_time: Option<u64>,
    /// Whether mining is enabled for this sequencer.
    pub enable_mining: bool,
    /// Optional messaging configuration.
    pub messaging: Option<crate::service::messaging::MessagingConfig>,
}

impl Default for SequencerConfig {
    fn default() -> Self {
        Self {
            block_time: None,
            enable_mining: true,
            messaging: None,
        }
    }
}
These changes would make the config more self-documenting and potentially easier to use.
What do you think about these suggestions, sensei?
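As a quick std-only sketch of how the suggested default would behave (using the hypothetical enable_mining rename, with the messaging field dropped so the snippet stands alone):

```rust
// Sketch of the suggested config shape; `enable_mining` is the proposed
// rename of `no_mining`, and defaults are centralized in `Default`.
#[derive(Debug)]
pub struct SequencerConfig {
    /// Time between blocks in milliseconds; `None` means variable block time.
    pub block_time: Option<u64>,
    /// Whether mining is enabled.
    pub enable_mining: bool,
}

impl Default for SequencerConfig {
    fn default() -> Self {
        Self { block_time: None, enable_mining: true }
    }
}

fn main() {
    // Struct-update syntax keeps call sites terse while defaults stay sane.
    let cfg = SequencerConfig { block_time: Some(1000), ..Default::default() };
    assert!(cfg.enable_mining);
    assert_eq!(cfg.block_time, Some(1000));
    println!("{:?}", cfg);
}
```

The inverted flag means the zero-config path does the intuitive thing (mining on), while callers who want the old no_mining behavior opt out explicitly.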
@@ -77,15 +76,14 @@ impl<'a> TaskBuilder<'a> {
     F: Future + Send + 'static,
     F::Output: Send + 'static,
 {
-    let Self { spawner, instrument, graceful_shutdown, name } = self;
+    let Self { manager, instrument, graceful_shutdown, .. } = self;
💡 Codebase verification
Graceful Shutdown Logging Removed
Ohayo, sensei! It appears that logging for graceful shutdown has been removed and no new logging was found elsewhere. This might impact our ability to debug graceful shutdowns. Was this removal intentional, and if so, how are we handling graceful shutdown logging now?
🔗 Analysis chain
Ohayo! The spawn method is looking sleek, sensei! The changes in the spawn method align perfectly with the TaskManager refactoring. The cancellation token retrieval has been updated accordingly.
However, I noticed that the logging for graceful shutdown has been removed. Was this intentional? If so, are we planning to handle graceful shutdown logging differently?
To ensure we haven't lost important logging elsewhere, let's check for any new logging related to graceful shutdown:
Also applies to: 84-84, 101-101
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for new logging related to graceful shutdown
rg --type rust 'graceful.*shutdown.*log' crates/
Length of output: 4562
@@ -74,31 +74,28 @@ type BlockProductionWithTxnsFuture =
 #[allow(missing_debug_implementations)]
 pub struct BlockProducer<EF: ExecutorFactory> {
     /// The inner mode of mining.
-    pub producer: Arc<RwLock<BlockProducerMode<EF>>>,
+    pub producer: RwLock<BlockProducerMode<EF>>,
💡 Codebase verification
Ohayo, sensei! Remaining Arc usages of BlockProducer detected.
While the producer field in block_producer.rs has been updated to remove Arc, there are still multiple instances of Arc<BlockProducer<EF>> across the codebase:
crates/katana/core/src/service/mod.rs
crates/katana/rpc/rpc/src/torii.rs
crates/katana/rpc/rpc/src/saya.rs
crates/katana/rpc/rpc/src/dev.rs
crates/katana/rpc/rpc/src/starknet/mod.rs
crates/katana/node/src/lib.rs
This inconsistency may lead to ownership and thread safety issues. Please review and update these instances to align with the new ownership model.
🔗 Analysis chain
Ohayo, sensei! Simplified ownership model looks good.
The change from Arc<RwLock<BlockProducerMode<EF>>> to RwLock<BlockProducerMode<EF>> for the producer field simplifies the ownership model. This aligns well with the removal of the Clone implementation for BlockProducer<EF>. However, be mindful that this change may affect how BlockProducer instances are shared across threads if that was a previous requirement.
Let's check if there are any remaining uses of Arc for BlockProducer in the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
rg "Arc<BlockProducer" --type rust
Length of output: 1168
#[derive(Deserialize)]
pub struct AccountInfo {
    pub balance: String,
    pub class_hash: String,
    pub private_key: String,
    pub public_key: String,
}
🛠️ Refactor suggestion
Consider adding field validation for AccountInfo.
The AccountInfo struct holds critical data like private_key and public_key. Implementing validation checks ensures data integrity and enhances security.
You might derive Validate and add custom validation logic:
use serde::Deserialize;
use validator::Validate;

#[derive(Deserialize, Validate)]
pub struct AccountInfo {
    #[validate(length(min = 1))]
    pub balance: String,
    // Add similar validations for the other fields.
}
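If pulling in the validator crate feels heavy, the same signal can be produced with a hand-rolled std-only check. This is a sketch: the 0x-prefix rule and the reduced field set are illustrative assumptions, not the real AccountInfo contract:

```rust
// Std-only sketch: validate deserialized fields up front and surface a
// descriptive error instead of trusting raw strings downstream.
#[derive(Debug)]
pub struct AccountInfo {
    pub balance: String,
    pub private_key: String,
    pub public_key: String,
}

impl AccountInfo {
    pub fn validate(&self) -> Result<(), String> {
        if self.balance.is_empty() {
            return Err("balance must not be empty".to_string());
        }
        // Assumed convention: keys are 0x-prefixed hex strings.
        for (name, value) in [("private_key", &self.private_key), ("public_key", &self.public_key)] {
            if !value.starts_with("0x") {
                return Err(format!("{} must be 0x-prefixed hex", name));
            }
        }
        Ok(())
    }
}

fn main() {
    let ok = AccountInfo {
        balance: "0xde0b6b3a7640000".to_string(),
        private_key: "0x1".to_string(),
        public_key: "0x2".to_string(),
    };
    assert!(ok.validate().is_ok());

    let bad = AccountInfo {
        balance: String::new(),
        private_key: "0x1".to_string(),
        public_key: "0x2".to_string(),
    };
    assert!(bad.validate().is_err());
}
```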
#[derive(Deserialize)]
pub struct JsonLogFields {
    #[serde(deserialize_with = "deserialize_katana_info")]
    pub message: KatanaInfo,
}
🛠️ Refactor suggestion
Ensure robust deserialization in JsonLogFields.
While using deserialize_katana_info for the message field is effective, consider implementing error handling to manage potential deserialization failures gracefully. This could prevent panics if the log message format changes unexpectedly.
You might update the deserialization function to handle errors:
 #[derive(Deserialize)]
 pub struct JsonLogFields {
-    #[serde(deserialize_with = "deserialize_katana_info")]
+    #[serde(deserialize_with = "deserialize_katana_info", default)]
     pub message: KatanaInfo,
 }
Additionally, modify deserialize_katana_info to return an Option<KatanaInfo>:
 fn deserialize_katana_info<'de, D>(deserializer: D) -> Result<Option<KatanaInfo>, D::Error>
 where
     D: serde::Deserializer<'de>,
 {
     let s = String::deserialize(deserializer)?;
-    serde_json::from_str(&s).map_err(serde::de::Error::custom)
+    match serde_json::from_str(&s) {
+        Ok(info) => Ok(Some(info)),
+        Err(_) => Ok(None),
+    }
 }
Committable suggestion was skipped due to low confidence.
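The degrade-to-None idea is easy to demonstrate without serde; here the same pattern is applied to plain string parsing (parse_or_none is an illustrative helper, not part of the codebase):

```rust
// Std-only illustration of the "fall back to None instead of failing" pattern
// suggested above: a parse error becomes None rather than an Err or a panic.
fn parse_or_none(s: &str) -> Option<u64> {
    s.parse::<u64>().ok()
}

fn main() {
    assert_eq!(parse_or_none("42"), Some(42));
    // Malformed input degrades gracefully instead of aborting the pipeline.
    assert_eq!(parse_or_none("not-a-number"), None);
}
```

The trade-off is silence: swallowing the error keeps log ingestion alive when the format drifts, but a counter or debug log on the None path is worth considering so drift does not go unnoticed.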
let starknet_config = get_default_test_starknet_config();
let sequencer_config = SequencerConfig { block_time, ..Default::default() };

let sequencer = TestSequencer::start(sequencer_config, starknet_config).await;
🛠️ Refactor suggestion
Ohayo sensei! Consider optimizing concurrency handling.
Using Arc<Mutex<_>> for shared state in concurrent_transactions_submissions may introduce contention. Consider using tokio::sync::RwLock or atomic types for better performance in concurrent scenarios.
Refactor the code to use RwLock for read-heavy workloads or atomic counters if only simple increments are needed.
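For the simple-increment case, a std-only sketch of the atomic alternative (no katana types involved, just the counting pattern):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Counts with an atomic instead of Arc<Mutex<u64>>: increments never block,
// so concurrent submitters don't serialize on a lock.
fn parallel_count(threads: u64, per_thread: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(parallel_count(8, 1000), 8000);
}
```

Relaxed ordering suffices for a pure counter; switch to Acquire/Release only if the count gates access to other shared data.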
.block_number()
.await
.expect("failed to fetch block number from forked network")
Ohayo, sensei! Avoid panics by handling errors when fetching the block number
The use of expect() on line 65 can lead to a panic if fetching the block number fails. It's better to handle the error to ensure the application remains stable.
Suggested change:
-    .await
-    .expect("failed to fetch block number from forked network")
+    .await
+    .map_err(|err| {
+        // Handle the error, e.g., log and return an error
+        eprintln!("Failed to fetch block number: {:?}", err);
+        err
+    })?
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

.block_number()
.await
.map_err(|err| {
    // Handle the error, e.g., log and return an error
    eprintln!("Failed to fetch block number: {:?}", err);
    err
})?
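The expect-to-Result refactor generalizes beyond this call site; here is a std-only sketch of the shape, where fetch_block_number is a stand-in for the real provider call, not the actual API:

```rust
// Std-only sketch of replacing `.expect(...)` with error propagation.
// `fetch_block_number` stands in for the real RPC call.
fn fetch_block_number(healthy: bool) -> Result<u64, String> {
    if healthy { Ok(1234) } else { Err("rpc unreachable".to_string()) }
}

fn start_fork(healthy: bool) -> Result<u64, String> {
    // Instead of panicking, annotate the failure and propagate it with `?`.
    let block = fetch_block_number(healthy)
        .map_err(|err| format!("failed to fetch block number from forked network: {}", err))?;
    Ok(block)
}

fn main() {
    assert_eq!(start_fork(true), Ok(1234));
    // The error path returns instead of panicking, so callers can recover.
    assert!(start_fork(false).is_err());
}
```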
let MaybePendingBlockWithTxHashes::Block(block) = block else {
    panic!("block to be forked is a pending block")
};
Ohayo, sensei! Replace panic! with proper error handling for pending blocks
Panicking on line 71 when encountering a pending block can crash the application. Consider returning an error instead to handle this case gracefully.
Suggested change:
-let MaybePendingBlockWithTxHashes::Block(block) = block else {
-    panic!("block to be forked is a pending block")
-};
+let block = match block {
+    MaybePendingBlockWithTxHashes::Block(block) => block,
+    _ => {
+        // Handle the pending block case
+        return Err("Cannot fork a pending block".into());
+    }
+};
Committable suggestion was skipped due to low confidence.
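The same shape works for any "unexpected variant" case; a std-only sketch with a stand-in enum (MaybePending mirrors the shape of the real type but is purely illustrative):

```rust
// Std-only sketch: turn an unexpected enum variant into an Err instead of
// a panic, so the caller decides whether the process should die.
#[derive(Debug)]
enum MaybePending {
    Block(u64),
    Pending,
}

fn require_confirmed(b: MaybePending) -> Result<u64, String> {
    match b {
        MaybePending::Block(n) => Ok(n),
        MaybePending::Pending => Err("cannot fork a pending block".to_string()),
    }
}

fn main() {
    assert_eq!(require_confirmed(MaybePending::Block(7)), Ok(7));
    assert!(require_confirmed(MaybePending::Pending).is_err());
}
```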
match block.status {
    BlockStatus::AcceptedOnL1 => FinalityStatus::AcceptedOnL1,
    BlockStatus::AcceptedOnL2 => FinalityStatus::AcceptedOnL2,
    _ => panic!("unable to fork for non-accepted block"),
},
Ohayo, sensei! Prevent panics by handling unsupported block statuses
Using panic! on line 100 for unsupported block statuses may cause crashes. It's advisable to handle this situation by returning an error.
Suggested change:
-_ => panic!("unable to fork for non-accepted block"),
+_ => {
+    // Handle unsupported block status
+    return Err("Unable to fork non-accepted block".into());
+},
Committable suggestion was skipped due to low confidence.
let blockchain: Blockchain = if let Some(forked_url) = &config.fork_rpc_url {
    let provider = Arc::new(JsonRpcClient::new(HttpTransport::new(forked_url.clone())));
    let forked_chain_id = provider.chain_id().await.unwrap();
Ohayo, sensei! Please handle potential errors instead of using unwrap()
Using unwrap() on line 57 may cause a panic if provider.chain_id() fails. To improve reliability, consider handling the error explicitly.
Suggested change:
-let forked_chain_id = provider.chain_id().await.unwrap();
+let forked_chain_id = match provider.chain_id().await {
+    Ok(chain_id) => chain_id,
+    Err(err) => {
+        // Handle the error appropriately, e.g., log and return an error
+        return Err(err.into());
+    }
+};
Committable suggestion was skipped due to low confidence.
Description
Reverts some commits related to some issues on slot

Related issue

Tests

Added to documentation?

Checklist
- Formatting: scripts/prettier.sh, scripts/rust_fmt.sh, scripts/cairo_fmt.sh
- Linting and docs: scripts/clippy.sh, scripts/docs.sh

Summary by CodeRabbit

New Features
- ServerConfig for managing server configurations.
- ApiKind to represent different APIs supported by Katana.
- SequencerConfig for configuring the sequencer.

Bug Fixes

Documentation

Chores