
revert(katana): revert a bunch of commits #2522

Closed
wants to merge 5 commits

Conversation

steebchen
Contributor

@steebchen steebchen commented Oct 11, 2024

Description

Reverts several commits related to issues encountered on slot.

Related issue

Tests

  • Yes
  • No, because they aren't needed
  • No, because I need help

Added to documentation?

  • README.md
  • Dojo Book
  • No documentation needed

Checklist

  • I've formatted my code (scripts/prettier.sh, scripts/rust_fmt.sh, scripts/cairo_fmt.sh)
  • I've linted my code (scripts/clippy.sh, scripts/docs.sh)
  • I've commented my code
  • I've requested a review after addressing the comments

Summary by CodeRabbit

  • New Features

    • Introduced a new struct ServerConfig for managing server configurations.
    • Added a new enum ApiKind to represent different APIs supported by Katana.
    • Implemented a new struct SequencerConfig for configuring the sequencer.
    • Enhanced transaction handling with updated chain ID retrieval methods.
  • Bug Fixes

    • Improved error handling and logging in various API methods.
  • Documentation

    • Updated test configurations to reflect new struct names and methods.
  • Chores

    • Removed deprecated configurations and structures to streamline the codebase.


coderabbitai bot commented Oct 11, 2024

Walkthrough

Ohayo, sensei! The changes in this pull request primarily focus on modifications to the CI/CD pipeline for a Rust project, specifically enhancing the configuration and management of the Katana framework. Key updates include the introduction of the KATANA_RUNNER_BIN environment variable in the CI configuration, the removal of the katana-pipeline from the workspace, and various structural updates across multiple modules to streamline configuration handling and improve code clarity. Additionally, several dependencies were added or removed across different Cargo.toml files, reflecting adjustments in project dependencies.

Changes

File Path Change Summary
.github/workflows/ci.yml Added KATANA_RUNNER_BIN environment variable; minor formatting adjustments.
Cargo.toml Removed katana-pipeline member from workspace; minor formatting adjustments.
bin/katana/Cargo.toml Added dependencies: katana-rpc.workspace = true, katana-rpc-api.workspace = true.
bin/katana/src/cli/node.rs Updated NodeArgs and ServerOptions structs; modified methods for node startup and configuration.
crates/dojo-test-utils/Cargo.toml Added dependencies: katana-rpc, katana-rpc-api.
crates/dojo-test-utils/src/sequencer.rs Updated TestSequencer structure; modified start method to accept new config types.
crates/dojo-utils/src/tx/waiter.rs Updated create_test_sequencer method to use new configuration methods.
crates/katana/core/Cargo.toml Added dependency: num-traits.workspace = true.
crates/katana/core/src/backend/config.rs Updated StarknetConfig and Environment structs; added new fields and default implementations.
crates/katana/core/src/backend/mod.rs Removed chain_spec field; added chain_id field in Backend struct; updated new method.
crates/katana/core/src/lib.rs Added new module declaration for sequencer.
crates/katana/core/src/sequencer.rs Introduced SequencerConfig struct marked as deprecated.
crates/katana/core/src/service/block_producer.rs Updated BlockProducer struct to remove Arc wrapper around RwLock.
crates/katana/node-bindings/Cargo.toml Removed dependency: regex.workspace = true.
crates/katana/node-bindings/src/json.rs Renamed JsonLog to JsonLogMessage; updated log parsing structures.
crates/katana/node-bindings/src/lib.rs Updated imports and error handling related to log messages.
crates/katana/node/Cargo.toml Removed dependencies: katana-pipeline, futures, strum, strum_macros.
crates/katana/node/src/config/db.rs Removed DbConfig struct.
crates/katana/node/src/config/dev.rs Removed DevConfig struct.
crates/katana/node/src/config/metrics.rs Removed MetricsConfig struct.
crates/katana/node/src/config/mod.rs Removed multiple modules and Config struct.
crates/katana/node/src/config/rpc.rs Removed RPC configuration constants and RpcConfig struct.
crates/katana/node/src/exit.rs Removed NodeStoppedFuture struct.
crates/katana/node/src/lib.rs Renamed Node struct to Handle; updated methods for node management.
crates/katana/pipeline/Cargo.toml Removed Cargo.toml for katana-pipeline.
crates/katana/pipeline/src/lib.rs Removed Pipeline struct and related functionality.
crates/katana/pipeline/src/stage/mod.rs Removed stage-related definitions and implementations.
crates/katana/pipeline/src/stage/sequencing.rs Removed Sequencing struct and its methods.
crates/katana/pool/src/validation/stateful.rs Updated TxValidator struct to remove Debug derive attribute.
crates/katana/primitives/src/chain_spec.rs Removed ChainSpec struct and its default implementation.
crates/katana/primitives/src/lib.rs Removed chain_spec module.
crates/katana/rpc/rpc-api/src/lib.rs Added new ApiKind enum with variants for supported APIs.
crates/katana/rpc/rpc/src/config.rs Introduced ServerConfig struct for server configuration.
crates/katana/rpc/rpc/src/dev.rs Updated DevApi struct to use Arc for block_producer.
crates/katana/rpc/rpc/src/lib.rs Added new public module declaration for config.
crates/katana/rpc/rpc/src/saya.rs Updated SayaApi struct to use Arc for block_producer.
crates/katana/rpc/rpc/src/starknet/mod.rs Updated StarknetApi struct to use Arc for block_producer.
crates/katana/rpc/rpc/src/starknet/read.rs Modified methods to access chain_id directly from backend.
crates/katana/rpc/rpc/src/starknet/trace.rs Updated transaction simulation methods to use chain_id.
crates/katana/rpc/rpc/src/starknet/write.rs Updated transaction handling methods to use chain_id.
crates/katana/rpc/rpc/src/torii.rs Updated ToriiApi struct to use Arc for block_producer.
crates/katana/rpc/rpc/tests/dev.rs Updated test configurations to use new SequencerConfig.
crates/katana/rpc/rpc/tests/saya.rs Updated test configurations to use SequencerConfig.
crates/katana/rpc/rpc/tests/starknet.rs Updated test configurations to use SequencerConfig.
crates/katana/rpc/rpc/tests/torii.rs Updated test configurations to use SequencerConfig.
crates/katana/tasks/src/lib.rs Introduced TokioTaskSpawner struct for task management.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

🧹 Outside diff range and nitpick comments (31)
crates/katana/rpc/rpc-api/src/lib.rs (1)

5-13: Ohayo, sensei! The new ApiKind enum looks sharp!

The addition of the ApiKind enum is a clean and efficient way to represent the different APIs supported by Katana. The use of #[derive(Debug, Copy, Clone)] is appropriate for this type of enum, allowing for easy debugging and value manipulation.

One small suggestion to enhance the code:

Consider adding a brief doc comment for each variant to provide more context. For example:

pub enum ApiKind {
    /// Starknet API
    Starknet,
    /// Torii API
    Torii,
    /// Development API
    Dev,
    /// Saya API
    Saya,
}

This will make the enum even more self-documenting and helpful for other developers working with this code.

crates/katana/rpc/rpc/src/lib.rs (1)

6-6: Ohayo, sensei! The new config module looks good!

The addition of the pub mod config; declaration is a welcome change. It's properly placed in alphabetical order among the other module declarations.

Consider updating the module-level documentation to include information about this new config module and its purpose within the RPC implementation.

crates/katana/core/src/sequencer.rs (1)

1-3: Ohayo, sensei! Consider documenting the plan for this temporary structure.

The TODO comment and deprecation notice indicate that this SequencerConfig struct is a temporary solution. To ensure smooth future development:

  1. Document the intended replacement (the "dedicated class for building node components").
  2. Create a tracking issue for the removal of this struct and implementation of its replacement.
  3. Add a link to the tracking issue in the TODO comment for easy reference.

This will help maintain code clarity and guide future development efforts.

Would you like assistance in creating a tracking issue for this task, sensei?
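A minimal sketch of how the deprecation notice could carry that pointer (the note text, field names, and the "builder" replacement are illustrative assumptions, not taken from this PR):

```rust
// Hypothetical sketch: a temporary struct deprecated with a note that points
// at its planned replacement. Only the `SequencerConfig` name comes from the
// PR; the fields and note wording are assumptions.
#[deprecated(note = "temporary; to be replaced by a dedicated node-component builder (see tracking issue)")]
pub struct SequencerConfig {
    pub no_mining: bool,
    pub block_time: Option<u64>,
}

// Callers that must keep using the shim silence the lint explicitly, which
// makes every remaining usage easy to grep for when the replacement lands.
#[allow(deprecated)]
pub fn is_no_mining(cfg: &SequencerConfig) -> bool {
    cfg.no_mining
}
```

Marking the type with `#[deprecated]` means every new usage produces a compiler warning, so the temporary struct cannot quietly spread through the codebase before the tracked removal happens.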

crates/katana/rpc/rpc/src/config.rs (2)

5-13: Ohayo, sensei! The ServerConfig struct looks solid!

The struct is well-organized and covers essential server configuration parameters. The use of Option for allowed_origins and metrics is appropriate for optional fields.

Consider adding documentation comments for the struct and its fields to improve code clarity and maintainability. For example:

/// Configuration for the server
#[derive(Debug, Clone)]
pub struct ServerConfig {
    /// Port number for the server to listen on
    pub port: u16,
    /// Hostname or IP address for the server
    pub host: String,
    // ... (add comments for other fields)
}

15-19: Ohayo, sensei! The addr() method looks good!

The addr() method serves a clear purpose and correctly combines the host and port fields.

Consider adding input validation or error handling for cases where the host or port might be invalid. For example:

impl ServerConfig {
    pub fn addr(&self) -> Result<std::net::SocketAddr, std::net::AddrParseError> {
        // Parsing into SocketAddr validates both the host and the port.
        format!("{}:{}", self.host, self.port).parse()
    }
}

This approach ensures that the returned address is valid and provides better error handling.

torii.toml (1)

1-14: Ohayo, sensei! The contract configuration looks well-structured.

The contracts array provides a clear and organized way to define multiple blockchain contracts. Each entry includes the contract type and address, which is essential for interacting with these contracts in the Torii framework.

However, I have a few suggestions to enhance maintainability and clarity:

  1. Consider adding a name field to each contract entry. This would make it easier to reference specific contracts in code without relying on comments or addresses.

  2. It might be beneficial to group contracts by type or project. This could be achieved by using nested arrays or objects.

  3. For improved readability, consider adding empty lines between different projects or contract types.

Here's an example of how these suggestions could be implemented:

contracts = [
    # Number Challenge
    { name = "NumberChallenge", type = "WORLD", address = "0x50c46c6f69ea9f12b24fcdd1569a584ae8893db0c70ea7884b605384da78ebc" },

    # Mainnet Tokens
    { name = "LORDS", type = "ERC20", address = "0x0124aeb495b947201f5fac96fd1138e326ad86195b98df6dec9009158a533b49" },

    # Mainnet NFTs
    { name = "BEASTS", type = "ERC721", address = "0x0158160018d590d93528995b340260e65aedd76d28a686e9daa5c4e8fad0c5dd" },
    { name = "LootSurvivor", type = "ERC721", address = "0x018108b32cea514a78ef1b0e4a0753e855cdf620bc0565202c02456f618c4dc4" },
    { name = "LootSurvivorGoldenToken", type = "ERC721", address = "0x04f5e296c805126637552cf3930e857f380e7c078e8f00696de4fc8545356b1d" },
    { name = "Blobert", type = "ERC721", address = "0x00539f522b29ae9251dbf7443c7a950cf260372e69efab3710a11bf17a9599f1" }
]

What do you think about these suggestions, sensei? They might make the configuration even more maintainable and easier to work with in the long run.

bin/katana/Cargo.toml (1)

14-15: Ohayo, sensei! These new dependencies look sharp!

The addition of katana-rpc and katana-rpc-api as workspace dependencies is a solid move. It suggests we're beefing up our RPC capabilities, which could lead to improved communication and functionality in our Katana framework.

Just a friendly reminder, sensei: As we're expanding our RPC features, let's keep an eye on how this might affect our overall architecture. It might be worth considering if this change will impact our API design or require any additional documentation updates.

crates/katana/rpc/rpc/src/starknet/write.rs (1)

65-65: Ohayo once more, sensei! LGTM with a small suggestion.

The change to use this.inner.backend.chain_id is consistent with the modifications in the other methods, maintaining uniformity across the implementation. The method correctly handles the contract address and transaction hash.

For even better consistency, consider adding an error check similar to the one in add_declare_transaction_impl:

let tx = tx
    .try_into_tx_with_chain_id(this.inner.backend.chain_id)
    .map_err(|_| StarknetApiError::InvalidContractClass)?;

This would provide more robust error handling and align perfectly with the declare transaction method.

crates/katana/core/src/service/mod.rs (1)

35-35: Ohayo, sensei! The Arc wrapper for block_producer is a wise choice.

The change from BlockProducer<EF> to Arc<BlockProducer<EF>> allows for shared ownership of the BlockProducer instance. This modification enhances concurrency and memory safety by enabling multiple parts of the code to access the BlockProducer simultaneously without the need for explicit locking.

Consider documenting this change in the struct's documentation to explain the reasoning behind using Arc and its implications for thread safety and performance.
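As a rough illustration of why the `Arc` helps (toy types, not the actual Katana definitions): cloning an `Arc` only bumps a reference count, so the service and each RPC handler can hold the same producer instance without copying it.

```rust
use std::sync::Arc;

// Toy stand-in for the real BlockProducer; the field is illustrative.
struct BlockProducer {
    interval_ms: u64,
}

// Hand the same producer to two owners. Arc::clone copies the pointer and
// increments the reference count; the producer itself is never duplicated.
fn share(producer: Arc<BlockProducer>) -> (Arc<BlockProducer>, Arc<BlockProducer>) {
    (Arc::clone(&producer), producer)
}
```

Both returned handles point at the same allocation, which is exactly the property the service and the RPC-side consumers need here.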

crates/katana/rpc/rpc/src/saya.rs (1)

32-32: Ohayo once more, sensei! Great update to the new method.

The modification to accept Arc<BlockProducer<EF>> in the new method signature is excellent. It aligns perfectly with the updated struct definition and allows for efficient sharing of the BlockProducer instance.

For consistency with the backend parameter, consider using Arc::clone explicitly:

pub fn new(backend: Arc<Backend<EF>>, block_producer: Arc<BlockProducer<EF>>) -> Self {
    Self {
        backend: Arc::clone(&backend),
        block_producer: Arc::clone(&block_producer),
    }
}

This makes it clear that we're cloning the Arc, not the inner value, and maintains a uniform style across both fields.

crates/katana/rpc/rpc/tests/saya.rs (3)

6-6: Ohayo, sensei! The imports look good, but let's tidy up a bit.

The changes to the imports reflect the updates in the configuration structure and naming conventions. Nice work on keeping things consistent!

Consider grouping related imports together for better readability. For example:

use dojo_test_utils::sequencer::{get_default_test_starknet_config, TestSequencer};
use dojo_utils::TransactionWaiter;

use katana_core::sequencer::SequencerConfig;
use katana_primitives::block::{BlockIdOrTag, BlockTag};
use katana_rpc_api::dev::DevApiClient;
use katana_rpc_api::saya::SayaApiClient;

Also applies to: 9-10


24-27: Ohayo again, sensei! The TestSequencer initialization looks sharp.

The updated configuration using SequencerConfig and get_default_test_starknet_config() is on point. It's consistent with the new imports and reflects the updated configuration structure. Well done!

For improved readability, consider using the struct update syntax more explicitly:

let sequencer = TestSequencer::start(
    SequencerConfig { no_mining: true, ..Default::default() },
    get_default_test_starknet_config(),
)
.await;

This makes it clearer that we're using the default config and only overriding the no_mining field.


93-96: Ohayo once more, sensei! The changes here mirror the previous function.

The TestSequencer initialization in this function is consistent with the changes made in fetch_traces_from_block. Great job maintaining consistency across test functions!

Consider extracting the common TestSequencer setup into a helper function to reduce code duplication:

async fn setup_test_sequencer() -> TestSequencer {
    TestSequencer::start(
        SequencerConfig { no_mining: true, ..Default::default() },
        get_default_test_starknet_config(),
    )
    .await
}

Then you can use it in both test functions like this:

let sequencer = setup_test_sequencer().await;

This would make the tests more DRY and easier to maintain.

.github/workflows/ci.yml (2)

48-48: Ohayo, sensei! Katana's path is now crystal clear!

The addition of the KATANA_RUNNER_BIN environment variable is a wise move. It ensures that the correct Katana binary is used during testing, which can prevent inconsistencies and make our CI more reliable.

However, to make this change even more robust, consider the following suggestions:

  1. Add a check to ensure the binary exists before running the tests.
  2. Consider making this path configurable, perhaps through a GitHub Actions input parameter.

What do you think, sensei? Shall we sharpen our Katana further?

Here's a potential improvement:

- run: |
    export KATANA_RUNNER_BIN=/tmp/bins/katana
    if [ ! -f "$KATANA_RUNNER_BIN" ]; then
      echo "Katana binary not found at $KATANA_RUNNER_BIN"
      exit 1
    fi
    cargo llvm-cov nextest --no-report --all-features --workspace --build-jobs 20

Line range hint 1-205: Ohayo again, sensei! Let's polish our CI katana to a mirror shine!

While our CI workflow is already quite sharp, here are some suggestions to make it even more formidable:

  1. Consistency: Consider using the RUST_VERSION environment variable consistently across all jobs that use Rust. This will make it easier to update the Rust version in the future.

  2. Caching: You're already using Swatinem/rust-cache@v2 in some jobs. Consider adding it to all jobs that compile Rust code to speed up the workflow.

  3. Matrix builds: For jobs that test on multiple platforms or Rust versions, consider using matrix builds to reduce duplication.

  4. Parallel jobs: Some jobs could potentially run in parallel. Review the job dependencies and see if you can optimize the workflow execution time.

  5. Documentation: Add comments to explain the purpose of each job, especially for complex steps. This will help future maintainers understand the workflow better.

  6. Conditional jobs: Consider using conditional jobs for tasks that don't need to run on every push or PR, such as documentation builds.

What do you think, sensei? Shall we refine our CI dojo to train even mightier code ninjas?

crates/metrics/src/prometheus_exporter.rs (1)

12-12: Ohayo, sensei! Nice import consolidation!

The change looks good! You've consolidated the imports from metrics_exporter_prometheus, which improves code organization.

However, I noticed that the pub keyword was removed. If PrometheusHandle is still meant to be re-exported from this module, consider keeping it public:

pub use metrics_exporter_prometheus::{PrometheusBuilder, PrometheusHandle};

If not, then this change is perfect as is!

crates/katana/rpc/rpc/tests/torii.rs (2)

28-32: Ohayo again, sensei! The updated TestSequencer configuration looks good!

The transition to SequencerConfig and the use of get_default_test_starknet_config() align well with the project updates. The explicit setting of no_mining: true enhances clarity.

A small suggestion for even better readability:

Consider using the struct update syntax to make it clear which fields are being customized:

SequencerConfig { no_mining: true, ..Default::default() }

This approach would make it immediately obvious which fields are being overridden from the default configuration.


185-189: Ohayo once more, sensei! The TestSequencer configuration for instant mining is on point!

The update to SequencerConfig and the use of get_default_test_starknet_config() maintain consistency with the previous test. Setting no_mining: false is perfect for this instant mining test case.

For consistency with the previous suggestion:

Consider using the struct update syntax here as well:

SequencerConfig { no_mining: false, ..Default::default() }

This would maintain a consistent style across both test functions and clearly show which fields are being customized.

crates/dojo-utils/src/tx/waiter.rs (3)

304-307: Ohayo, sensei! Consider updating deprecated imports.

The use of #[allow(deprecated)] suggests that some imported items are deprecated. While this allows the code to compile without warnings, it's generally a good practice to update to non-deprecated alternatives when available.

Could you please investigate if there are any newer, non-deprecated alternatives for these imports? If not, it might be worth documenting why we're still using deprecated items.


322-326: Ohayo, sensei! Changes look good, but let's document the use of deprecated items.

The implementation change for create_test_sequencer looks good, using SequencerConfig::default() and get_default_test_starknet_config(). However, the #[allow(deprecated)] attribute suggests we're using deprecated items.

Could you add a comment explaining why we're still using these deprecated items? This will help future maintainers understand the reasoning behind this decision.


Line range hint 1-478: Ohayo, sensei! Overall, the changes look good but let's plan for the future.

The modifications to the test configuration setup are straightforward and don't introduce any apparent issues. However, the use of deprecated items is a concern that we should address in the future.

Would you like me to create a GitHub issue to track the task of updating these deprecated items in the future? This will ensure we don't forget about it and can plan for the update when appropriate.

crates/katana/rpc/rpc/src/starknet/mod.rs (1)

64-64: Ohayo again, sensei! Great job on maintaining consistency!

The modification of the block_producer parameter type from BlockProducer<EF> to Arc<BlockProducer<EF>> in the new method is spot-on. It aligns perfectly with the change in the Inner struct and allows the caller to share ownership of the BlockProducer with the StarknetApi instance.

For even better clarity, consider adding a brief comment explaining why Arc is used here. Something like:

// Use Arc to allow shared ownership of BlockProducer across multiple threads
block_producer: Arc<BlockProducer<EF>>,

This would help future maintainers understand the rationale behind using Arc at a glance.

crates/katana/rpc/rpc/src/starknet/read.rs (1)

Line range hint 434-474: Ohayo, sensei! LGTM with a small suggestion

The changes look good! The consistent use of this.inner.backend.chain_id and the introduction of should_validate improve code clarity. It's great that you're now respecting the node's disable_validate configuration during fee estimation.

A small suggestion to further improve readability:

Consider extracting the should_validate logic into a separate function. This would make the main estimate_fee function cleaner and easier to understand. Here's a suggested implementation:

+ fn should_validate_transactions(&self, skip_validate: bool) -> bool {
+     #[allow(deprecated)]
+     !(skip_validate || self.inner.backend.config.disable_validate)
+ }

 async fn estimate_fee(
     &self,
     request: Vec<BroadcastedTx>,
     simulation_flags: Vec<SimulationFlagForEstimateFee>,
     block_id: BlockIdOrTag,
 ) -> RpcResult<Vec<FeeEstimate>> {
     self.on_cpu_blocking_task(move |this| {
         // ... (existing code)

         let skip_validate =
             simulation_flags.contains(&SimulationFlagForEstimateFee::SkipValidate);

-        #[allow(deprecated)]
-        let should_validate = !(skip_validate || this.inner.backend.config.disable_validate);
+        let should_validate = this.should_validate_transactions(skip_validate);
         let flags = katana_executor::SimulationFlag {
             skip_validate: !should_validate,
             // ... (rest of the code)
         };

         // ... (existing code)
     })
     .await
 }

This change would encapsulate the validation logic and make it easier to maintain in the future.

crates/katana/core/src/service/block_producer.rs (2)

83-84: LGTM! Consistent with the struct changes, sensei.

The changes in the interval method align well with the updated BlockProducer struct. Removing the Arc wrapper is consistent with the new ownership model.

As a minor suggestion, consider using a more descriptive variable name than prod. For example:

-        let prod = IntervalBlockProducer::new(backend, Some(interval));
-        Self { producer: BlockProducerMode::Interval(prod).into() }
+        let interval_producer = IntervalBlockProducer::new(backend, Some(interval));
+        Self { producer: BlockProducerMode::Interval(interval_producer).into() }

This improves code readability by making the variable's purpose more explicit.


90-91: Ohayo! Consistent changes across methods, well done sensei!

The modifications in on_demand and instant methods are consistent with the changes in the interval method and the updated BlockProducer struct. Good job maintaining consistency!

For improved readability, consider applying the same naming suggestion to both methods:

// In on_demand method
-        let prod = IntervalBlockProducer::new(backend, None);
-        Self { producer: BlockProducerMode::Interval(prod).into() }
+        let on_demand_producer = IntervalBlockProducer::new(backend, None);
+        Self { producer: BlockProducerMode::Interval(on_demand_producer).into() }

// In instant method
-        let prod = InstantBlockProducer::new(backend);
-        Self { producer: BlockProducerMode::Instant(prod).into() }
+        let instant_producer = InstantBlockProducer::new(backend);
+        Self { producer: BlockProducerMode::Instant(instant_producer).into() }

This change would make the code more self-documenting and consistent across all three methods.

Also applies to: 97-98

crates/katana/core/src/backend/config.rs (2)

15-21: Consider adding documentation comments for new fields

Ohayo, sensei! To improve code readability and maintainability, please consider adding documentation comments for the newly added fields in StarknetConfig: disable_fee, disable_validate, db_dir, and genesis.
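For example, the documented fields might look like the sketch below (the field names come from the review summary; the types and doc wording are assumptions for illustration):

```rust
use std::path::PathBuf;

/// Placeholder for the real genesis type from `katana_primitives`.
pub struct Genesis;

pub struct StarknetConfig {
    /// When `true`, transactions are executed without charging fees.
    pub disable_fee: bool,
    /// When `true`, transaction validation is skipped.
    pub disable_validate: bool,
    /// Directory for the node database; `None` keeps state in memory.
    pub db_dir: Option<PathBuf>,
    /// Genesis state the chain is initialized from.
    pub genesis: Genesis,
}
```

Doc comments like these surface directly in `cargo doc` output and in editor hover tooltips, so the cost of adding them is paid back quickly.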


61-61: Avoid using unwrap() to prevent potential panics

Ohayo, sensei! Using unwrap() on ChainId::parse("KATANA") may cause a panic if the parsing fails. Consider using expect() with a meaningful message or handling the Result to enhance error handling.

Apply this diff to make the change:

- chain_id: ChainId::parse("KATANA").unwrap(),
+ chain_id: ChainId::parse("KATANA").expect("Failed to parse chain ID 'KATANA'"),
crates/katana/node/src/lib.rs (3)

Line range hint 115-135: Ohayo, sensei! Handle potential panics when converting gas prices

The conversion of block.l1_gas_price.price_in_fri to u128 using .expect("should fit in u128") will cause a panic if the value doesn't fit. To prevent unexpected crashes, consider handling this conversion more gracefully.

Apply this diff to handle possible conversion errors:

- starknet_config.genesis.gas_prices.strk =
-     block.l1_gas_price.price_in_fri.to_u128().expect("should fit in u128");
+ starknet_config.genesis.gas_prices.strk = block
+     .l1_gas_price
+     .price_in_fri
+     .to_u128()
+     .unwrap_or_default();

Alternatively, add proper error handling to address the conversion issue.


Line range hint 279-288: Ohayo, sensei! Prevent panics by handling invalid origins gracefully

Using .expect("Invalid URI") in parsing allowed origins can cause the server to panic if an invalid URI is provided. To avoid unexpected crashes, consider handling parsing errors and providing informative feedback.

Apply this diff to handle parsing errors gracefully:

let cors =
    config.allowed_origins.clone().map(|allowed_origins| match allowed_origins.as_slice() {
        [origin] if origin == "*" => cors.allow_origin(AllowOrigin::mirror_request()),
        origins => {
+           let mut valid_origins = Vec::new();
+           for o in origins {
+               match o.parse::<Uri>() {
+                   Ok(uri) => valid_origins.push(uri),
+                   Err(e) => {
+                       eprintln!("Invalid origin '{}': {}", o, e);
+                       // Optionally, log the error or handle it accordingly
+                   }
+               }
+           }
            cors.allow_origin(valid_origins)
        },
    });

231-231: Ohayo, sensei! Confirm task naming consistency

In the block production task, the task is named "BlockProduction". Ensure that task names across the application are consistent and follow the same naming conventions for easier monitoring and debugging.

crates/katana/core/src/backend/mod.rs (1)

105-105: Ohayo, sensei! Modifying a deprecated config field

On line 105, config.env.chain_id is modified even though config is marked as deprecated. Consider whether continuing to modify a deprecated field aligns with your deprecation strategy.

Perhaps introduce a new configuration mechanism or migrate the necessary fields to avoid using the deprecated config.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between fd1e0f9 and a954d7e.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (52)
  • .github/workflows/ci.yml (1 hunks)
  • Cargo.toml (0 hunks)
  • bin/katana/Cargo.toml (1 hunks)
  • bin/katana/src/cli/node.rs (11 hunks)
  • crates/dojo-test-utils/Cargo.toml (1 hunks)
  • crates/dojo-test-utils/src/sequencer.rs (4 hunks)
  • crates/dojo-utils/src/tx/waiter.rs (2 hunks)
  • crates/katana/core/Cargo.toml (1 hunks)
  • crates/katana/core/src/backend/config.rs (2 hunks)
  • crates/katana/core/src/backend/mod.rs (3 hunks)
  • crates/katana/core/src/lib.rs (1 hunks)
  • crates/katana/core/src/sequencer.rs (1 hunks)
  • crates/katana/core/src/service/block_producer.rs (1 hunks)
  • crates/katana/core/src/service/messaging/service.rs (2 hunks)
  • crates/katana/core/src/service/mod.rs (3 hunks)
  • crates/katana/node-bindings/Cargo.toml (0 hunks)
  • crates/katana/node-bindings/src/json.rs (1 hunks)
  • crates/katana/node-bindings/src/lib.rs (6 hunks)
  • crates/katana/node/Cargo.toml (0 hunks)
  • crates/katana/node/src/config/db.rs (0 hunks)
  • crates/katana/node/src/config/dev.rs (0 hunks)
  • crates/katana/node/src/config/metrics.rs (0 hunks)
  • crates/katana/node/src/config/mod.rs (0 hunks)
  • crates/katana/node/src/config/rpc.rs (0 hunks)
  • crates/katana/node/src/exit.rs (0 hunks)
  • crates/katana/node/src/lib.rs (7 hunks)
  • crates/katana/pipeline/Cargo.toml (0 hunks)
  • crates/katana/pipeline/src/lib.rs (0 hunks)
  • crates/katana/pipeline/src/stage/mod.rs (0 hunks)
  • crates/katana/pipeline/src/stage/sequencing.rs (0 hunks)
  • crates/katana/pool/src/validation/stateful.rs (1 hunks)
  • crates/katana/primitives/src/chain_spec.rs (0 hunks)
  • crates/katana/primitives/src/lib.rs (0 hunks)
  • crates/katana/rpc/rpc-api/src/lib.rs (1 hunks)
  • crates/katana/rpc/rpc/src/config.rs (1 hunks)
  • crates/katana/rpc/rpc/src/dev.rs (2 hunks)
  • crates/katana/rpc/rpc/src/lib.rs (1 hunks)
  • crates/katana/rpc/rpc/src/saya.rs (1 hunks)
  • crates/katana/rpc/rpc/src/starknet/mod.rs (1 hunks)
  • crates/katana/rpc/rpc/src/starknet/read.rs (4 hunks)
  • crates/katana/rpc/rpc/src/starknet/trace.rs (2 hunks)
  • crates/katana/rpc/rpc/src/starknet/write.rs (3 hunks)
  • crates/katana/rpc/rpc/src/torii.rs (1 hunks)
  • crates/katana/rpc/rpc/tests/dev.rs (1 hunks)
  • crates/katana/rpc/rpc/tests/saya.rs (3 hunks)
  • crates/katana/rpc/rpc/tests/starknet.rs (14 hunks)
  • crates/katana/rpc/rpc/tests/torii.rs (3 hunks)
  • crates/katana/tasks/src/lib.rs (0 hunks)
  • crates/katana/tasks/src/manager.rs (6 hunks)
  • crates/katana/tasks/src/task.rs (6 hunks)
  • crates/metrics/src/prometheus_exporter.rs (1 hunks)
  • torii.toml (1 hunks)
💤 Files with no reviewable changes (16)
  • Cargo.toml
  • crates/katana/node-bindings/Cargo.toml
  • crates/katana/node/Cargo.toml
  • crates/katana/node/src/config/db.rs
  • crates/katana/node/src/config/dev.rs
  • crates/katana/node/src/config/metrics.rs
  • crates/katana/node/src/config/mod.rs
  • crates/katana/node/src/config/rpc.rs
  • crates/katana/node/src/exit.rs
  • crates/katana/pipeline/Cargo.toml
  • crates/katana/pipeline/src/lib.rs
  • crates/katana/pipeline/src/stage/mod.rs
  • crates/katana/pipeline/src/stage/sequencing.rs
  • crates/katana/primitives/src/chain_spec.rs
  • crates/katana/primitives/src/lib.rs
  • crates/katana/tasks/src/lib.rs
🧰 Additional context used
🔇 Additional comments (81)
crates/katana/core/src/lib.rs (3)

Line range hint 1-1: Ohayo again, sensei! The compiler attribute is a nice touch!

The #![cfg_attr(not(test), warn(unused_crate_dependencies))] attribute is a great practice. It helps keep the dependency tree clean by warning about unused crate dependencies in non-test builds. Excluding test builds from this warning is smart, as test dependencies might not be used in the main code.


Line range hint 1-10: Ohayo once more, sensei! Let's wrap up this review.

The changes to this file, while minimal, are significant. The addition of the sequencer module suggests an expansion of the library's functionality, which could have far-reaching effects on the project. The existing structure and good practices have been maintained, including the alphabetical ordering of modules and the use of compiler attributes for dependency management.

As we move forward, it would be beneficial to:

  1. Ensure proper documentation for the new sequencer module.
  2. Update any relevant tests to cover the new functionality.
  3. Consider the impact of this new module on the overall architecture and performance of the system.

Great job on keeping the codebase clean and well-structured, sensei!


6-6: Ohayo, sensei! The sequencer module addition looks good!

The new sequencer module has been added in alphabetical order, maintaining the existing structure of the file. This addition expands the functionality of the library, which is a positive change.

Let's verify the existence of the sequencer module file:

✅ Verification successful

Ohayo, sensei! Verified the existence of the sequencer module file. Everything looks good!

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the existence of the sequencer module file

# Test: Check if the sequencer.rs file exists in the expected location
fd --type f --full-path "crates/katana/core/src/sequencer.rs"

Length of output: 97

crates/katana/rpc/rpc/src/config.rs (1)

1-3: Ohayo, sensei! LGTM on the imports!

The imports look good and are relevant to the struct fields. Nice job keeping it clean and focused!

torii.toml (2)

1-14: Ohayo once more, sensei! Let's consider the impact on the system.

The introduction of this contracts configuration in torii.toml likely affects how the Torii framework interacts with these blockchain contracts. It's important to ensure that the rest of the system is prepared to use this new configuration format.

To check this, we can search for places in the codebase where these contract addresses might be used:

#!/bin/bash
# Description: Search for potential usage of contract addresses in the codebase

echo "Searching for potential contract address usage:"
echo "-----------------------------------------------"
rg --type rust --type ts --type js '0x[a-fA-F0-9]{64}'
echo "-----------------------------------------------"
echo "Please review these occurrences and ensure they are updated to use the new configuration."

Sensei, could you run this script and verify if there are any hardcoded contract addresses in the codebase that should be updated to use this new configuration? This will help ensure consistency and make future updates easier.


1-14: Ohayo again, sensei! Let's verify the contract addresses.

It's crucial to ensure that the contract addresses are correct and match the intended contracts on the blockchain. While I can't directly verify these addresses, it would be wise to double-check them.

Here's a script to help verify the contract addresses:

This script will extract and display all contract addresses from the torii.toml file. Sensei, could you please run this script and verify each address against the official contract deployment information?

crates/dojo-test-utils/Cargo.toml (1)

20-21: Ohayo, sensei! New katana dependencies look sharp!

The addition of katana-rpc and katana-rpc-api dependencies seems appropriate for enhancing the test utilities. These new katanas will surely slice through any RPC-related testing challenges!

Let's make sure these new dependencies are properly forged into our project, sensei:

crates/katana/core/Cargo.toml (1)

24-24: Ohayo, sensei! LGTM: Adding num-traits dependency.

The addition of num-traits to the dependencies looks good. This crate provides a set of numeric traits that will likely enhance the mathematical capabilities of the katana-core package. Using .workspace = true ensures version consistency across the project, which is a wise choice.

crates/katana/rpc/rpc/src/dev.rs (3)

15-15: Ohayo, sensei! Nice work on enhancing thread safety!

The change from BlockProducer<EF> to Arc<BlockProducer<EF>> is a solid improvement. By wrapping BlockProducer<EF> in an Arc, we're enabling shared ownership and thread-safe reference counting. This modification suggests that block_producer might be shared across multiple threads or stored in multiple places, which is a good practice for concurrent systems.
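To make the sharing model concrete, sensei, here is a minimal std-only sketch. `BlockProducer` is a toy stand-in for the real katana type; the pattern shown is the one the review describes — each consumer holds its own `Arc` handle, and the value is freed only when the last handle drops:

```rust
use std::sync::Arc;
use std::thread;

// Toy stand-in for the real BlockProducer in katana-core.
struct BlockProducer {
    interval_ms: u64,
}

fn main() {
    let producer = Arc::new(BlockProducer { interval_ms: 500 });

    // Each thread gets its own cheap reference-counted handle.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let producer = Arc::clone(&producer);
            thread::spawn(move || producer.interval_ms)
        })
        .collect();

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("total = {total}");
}
```

Note that this compiles only because the shared type is `Send + Sync`; if the real `BlockProducer` held interior mutability, it would need a `Mutex`/`RwLock` inside the `Arc`.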


19-21: Ohayo again, sensei! Excellent consistency in your changes!

The update to the new method signature, accepting Arc<BlockProducer<EF>> instead of BlockProducer<EF>, is spot on. This change aligns perfectly with the modification we saw in the DevApi struct definition. It ensures that the Arc<BlockProducer<EF>> is passed directly into the struct, maintaining the shared ownership model throughout the codebase. Keep up the good work!


96-96: Ohayo once more, sensei! Interesting change in account retrieval!

The modification to fetch accounts from self.backend.config.genesis instead of self.backend.chain_spec.genesis looks good. This change suggests a shift in how we're managing our configuration, possibly centralizing it. Nice move!

However, let's make sure we've got all our ducks in a row:

  1. Is this change consistent across the entire codebase?
  2. Does self.backend.config.genesis provide the same data structure and content as the previous source?

Let's run a quick check to ensure consistency:

These checks will help us confirm the consistency of the change and identify any potential issues.

✅ Verification successful

Ohayo once more, sensei! After running the verification scripts, everything checks out smoothly. There are no lingering references to chain_spec.genesis, and config.genesis is consistently utilized across the codebase with the expected structure. Great job on centralizing the configuration!

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of config.genesis vs chain_spec.genesis

# Test 1: Check for any remaining uses of chain_spec.genesis
echo "Checking for remaining uses of chain_spec.genesis:"
rg --type rust 'chain_spec\.genesis'

# Test 2: Check for uses of config.genesis
echo "Checking for uses of config.genesis:"
rg --type rust 'config\.genesis'

# Test 3: Check the structure of config.genesis
echo "Checking the structure of config.genesis:"
ast-grep --lang rust --pattern 'config.genesis.$_'

Length of output: 4073

crates/katana/rpc/rpc/src/starknet/write.rs (2)

24-24: Ohayo, sensei! LGTM: Chain ID retrieval simplified.

The change to use this.inner.backend.chain_id instead of this.inner.backend.chain_spec.id simplifies the chain ID retrieval process. This modification is consistent with other methods in the file and improves code clarity.


44-45: Ohayo again, sensei! LGTM: Consistent chain ID usage in declare transaction.

The modification to use this.inner.backend.chain_id aligns with the changes made in the add_invoke_transaction_impl method. The error handling for invalid contract class remains intact, ensuring robust transaction processing.

crates/katana/core/src/service/mod.rs (3)

6-6: Ohayo, sensei! The Arc import looks good.

The addition of use std::sync::Arc; is necessary and correctly placed for the changes made to the BlockProductionTask struct.


Line range hint 1-124: Ohayo, sensei! Let's summarize the changes and consider their broader impact.

The introduction of Arc<BlockProducer<EF>> in the BlockProductionTask struct and its constructor is a significant change that enhances thread safety and allows for shared ownership. While the changes in this file are consistent and well-implemented, there are a few points to consider:

  1. The poll method in the Future implementation now operates on an Arc-wrapped block_producer. This change is transparent due to Arc's deref coercion, but it's worth noting for future maintenance.

  2. The TransactionMiner and other parts of this file remain unchanged, which is good for maintaining existing functionality.

  3. These changes might have implications for other parts of the codebase that interact with BlockProductionTask.

To ensure the changes are fully integrated:

  1. Review any code that creates or interacts with BlockProductionTask to ensure it's using Arc correctly.
  2. Consider adding comments or documentation explaining the thread-safety benefits of using Arc for BlockProducer.
  3. If not already done, update any relevant tests to reflect these changes.

Let's check for any other uses of BlockProducer that might need similar Arc wrapping:

#!/bin/bash
# Description: Find all occurrences of BlockProducer to check for potential Arc wrapping needs

# Test: Search for BlockProducer usage
rg --type rust -A 5 'BlockProducer'

Ohayo and arigato for these thoughtful changes, sensei! They contribute to a more robust and concurrent-friendly design.


45-49: Ohayo, sensei! The new method update looks good, but let's verify its usage.

The change in the new method signature to accept Arc<BlockProducer<EF>> is consistent with the modification of the block_producer field. This ensures that the BlockProductionTask is created with a shared BlockProducer instance.

To ensure this change doesn't break existing code, let's verify all calls to this new method:

✅ Verification successful

Ohayo, sensei! It looks like the previous script had a hiccup. Let’s run a corrected version to verify all usages of the new method:


Ohayo, sensei! The new method is being used correctly in the codebase. No issues found with its usage.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all occurrences of BlockProductionTask::new to verify correct usage

# Test: Search for BlockProductionTask::new calls
rg --type rust -A 5 'BlockProductionTask::new'

Length of output: 25557


Script:

#!/bin/bash
# Description: Find all occurrences of BlockProductionTask::new to verify correct usage

# Test: Search for BlockProductionTask::new calls with proper syntax
rg --type rust 'BlockProductionTask::new' -A 5

Length of output: 514

crates/katana/rpc/rpc/src/saya.rs (3)

19-19: Ohayo, sensei! Nice use of Arc for shared ownership.

The change from BlockProducer<EF> to Arc<BlockProducer<EF>> for the block_producer field is a good improvement. This allows for efficient sharing of the BlockProducer instance across multiple owners without the need for deep cloning. It's also consistent with the use of Arc for the backend field, promoting a uniform approach to resource management.


24-27: Ohayo again, sensei! Excellent update to the Clone implementation.

The modification to use Arc::clone for the block_producer field in the Clone implementation is spot-on. This change:

  1. Correctly handles the new Arc wrapper around BlockProducer.
  2. Ensures proper reference counting, avoiding unnecessary deep clones.
  3. Maintains consistency with the cloning approach used for the backend field.

Great job on keeping the implementation aligned with the struct changes!
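For readers following along, sensei, the shape of such a `Clone` implementation can be sketched like this. The struct and field names mirror the review's description but are simplified stand-ins, not the real `SayaApi`:

```rust
use std::sync::Arc;

struct Backend;
struct BlockProducer;

// Simplified stand-in for SayaApi: cloning only bumps refcounts,
// never deep-copies the backend or producer.
struct SayaApi {
    backend: Arc<Backend>,
    block_producer: Arc<BlockProducer>,
}

impl Clone for SayaApi {
    fn clone(&self) -> Self {
        Self {
            backend: Arc::clone(&self.backend),
            block_producer: Arc::clone(&self.block_producer),
        }
    }
}

fn main() {
    let api = SayaApi {
        backend: Arc::new(Backend),
        block_producer: Arc::new(BlockProducer),
    };
    let cloned = api.clone();
    // Both handles point at the same allocations.
    println!("shared: {}", Arc::ptr_eq(&api.block_producer, &cloned.block_producer));
}
```

When every field is `Clone`, `#[derive(Clone)]` generates exactly this code; the manual impl is only needed if some field requires special handling.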


Line range hint 1-138: Ohayo, sensei! Overall excellent changes to the SayaApi implementation.

The modifications to SayaApi consistently improve the ownership model for the BlockProducer. By using Arc, you've enhanced the efficiency of resource sharing while maintaining clear ownership semantics. The changes are well-thought-out and consistently applied across the struct definition, Clone implementation, and constructor method.

Great job on improving the code quality and resource management!

crates/katana/tasks/src/task.rs (4)

9-9: Ohayo, sensei! These import changes look sharp!

The addition of error from tracing and the update to import TaskManager align well with the refactoring from TaskSpawner to TaskManager. It seems we might be gearing up for some enhanced error logging. Nice move!

Also applies to: 11-11


35-35: Sensei, you've mastered the art of refactoring!

The TaskBuilder struct has been elegantly updated to use TaskManager instead of TaskSpawner. The changes in the field name and the new method signature are consistent and maintain the lifetime parameter. This refactoring aligns perfectly with the overall shift in task management approach. Excellent work!

Also applies to: 47-48


128-129: Sensei, your critical task handling is as sharp as a katana!

The changes in the CriticalTaskBuilder are spot-on, maintaining consistency with the TaskManager refactoring. The logic for handling critical tasks remains intact, which is crucial for maintaining system stability. Great job on keeping everything aligned!


Line range hint 1-158: Ohayo, sensei! Let's wrap up this epic refactoring journey!

The shift from TaskSpawner to TaskManager has been implemented consistently throughout the file. This refactoring maintains the core functionality of task building and spawning while potentially offering new capabilities through the TaskManager.

However, there are a few points to consider:

  1. The removal of the graceful shutdown logging might impact our ability to debug and monitor task lifecycles. We should ensure this information is captured elsewhere if needed.

  2. This change likely has implications for other parts of the system that interact with tasks. It would be wise to verify that all dependent components have been updated accordingly.

  3. The change in the cancellation token retrieval method (manager.on_cancel vs spawner.cancellation_token()) might indicate a change in the cancellation mechanism. We should ensure this doesn't introduce any unexpected behavior.

To ensure a smooth transition, let's run some checks:

#!/bin/bash
# Check for any remaining references to TaskSpawner
echo "Checking for remaining TaskSpawner references:"
rg --type rust 'TaskSpawner' crates/

# Look for potential mismatches in task cancellation handling
echo "Checking for potential mismatches in task cancellation:"
rg --type rust 'cancellation_token|on_cancel' crates/

# Verify consistent use of TaskManager
echo "Verifying consistent use of TaskManager:"
rg --type rust 'TaskManager' crates/

Great work on this refactoring, sensei! It sets a solid foundation for future improvements in our task management system.

crates/katana/rpc/rpc/tests/dev.rs (4)

Line range hint 15-95: Ohayo once more, sensei! Let's examine the test functions.

I'm pleased to see that the core structure and logic of the test functions have remained stable. This is excellent news! Here's why:

  1. Stability in tests: The unchanged nature of these tests suggests that the core functionality being tested hasn't been significantly altered. This is crucial for maintaining the reliability of our test suite.

  2. Consistent coverage: By keeping these tests intact, we ensure that we're still covering the same scenarios and edge cases as before.

  3. Easier review: The stability of these tests makes it easier to focus on the actual changes in the production code, rather than having to scrutinize changes in both the tests and the code they're testing.

However, it's always good to double-check if any new functionality introduced by the configuration changes might require additional test cases.

Let's run a quick check to see if there are any TODO comments or FIXME notes that might indicate areas needing additional testing:

#!/bin/bash
# Check for TODO or FIXME comments in the test files
rg --type rust "TODO|FIXME" crates/katana/rpc/rpc/tests/

If any are found, consider whether they relate to the recent changes and if additional test cases might be needed.


Line range hint 97-122: Ohayo for the last time, sensei! Let's address the commented-out test.

I noticed a commented-out test function at the end of the file. While it's sometimes useful to keep old code around for reference, it can also lead to confusion and clutter. Here are a few thoughts:

  1. Relevance: The test seems to be related to setting storage in "instant mode". Is this functionality still relevant to the current state of the project?

  2. Maintenance: Commented-out code can become outdated quickly as the rest of the codebase evolves. If this test is still relevant, consider updating and uncommenting it.

  3. Version Control: Remember that our version control system (like Git) keeps track of old code. If this test is no longer needed, it might be better to remove it entirely and rely on version history if we need to reference it later.

  4. TODO Comment: If there's a specific reason this test is commented out (e.g., it's waiting for a bug fix or a feature to be implemented), consider adding a TODO comment explaining why it's commented out and when it should be re-enabled.

Consider either updating and uncommenting this test, or removing it if it's no longer relevant. If there's a specific reason it needs to remain commented out for now, please add a TODO comment explaining why.

Let's check if there are other commented-out tests in the project that might need similar attention:

#!/bin/bash
# Check for commented-out test functions
rg --type rust "^//\s*#\[test\]" crates/katana/

This will help us identify if this is an isolated case or if there's a broader pattern of commented-out tests that needs addressing.


1-4: Ohayo, sensei! Let's address the deprecation warning and import changes.

The addition of the deprecation warning and the changes in imports suggest that there have been some structural updates in the project. Here are a few points to consider:

  1. The deprecation warning might indicate that some parts of this test file are using deprecated features. It would be wise to plan for updating these in the future.
  2. The import change from get_default_test_config to get_default_test_starknet_config suggests a more specific configuration setup for Starknet.
  3. The change from SequencingConfig to SequencerConfig might indicate a naming convention update or a change in the underlying structure.

To ensure these changes are consistent across the project, let's run a quick check:

#!/bin/bash
# Check for usage of deprecated config and new config
rg --type rust "get_default_test_config|get_default_test_starknet_config"
rg --type rust "SequencingConfig|SequencerConfig"

9-11: Ohayo again, sensei! Let's look at the create_test_sequencer function changes.

The create_test_sequencer function has undergone some significant changes:

  1. The function is now async, which suggests that it's performing asynchronous operations. This is a good change as it allows for better handling of potentially time-consuming operations.

  2. The way the TestSequencer is started has been updated:

    • Old: TestSequencer::start(get_default_test_config(SequencingConfig::default()))
    • New: TestSequencer::start(SequencerConfig::default(), get_default_test_starknet_config()).await

    This change reflects the new configuration structure we observed in the imports. It separates the sequencer config from the Starknet config, which could provide more flexibility and clarity.

These changes seem to align well with the project's evolution. However, it's important to ensure that all calls to create_test_sequencer across the project have been updated to handle the new async nature of the function.

Let's verify the usage of this function across the project:

If any non-async usage is found, those instances will need to be updated.

✅ Verification successful

Ohayo again, sensei! It seems the previous search didn’t find any instances of create_test_sequencer(). Let’s run a corrected script to locate all usages of this function and ensure they handle the new asynchronous changes properly.


Ohayo again, sensei! I’ve reviewed all instances of create_test_sequencer() and confirmed that each usage appropriately handles the new async nature with .await. Everything looks good!

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for usage of create_test_sequencer
rg --type rust "create_test_sequencer\(\)"

Length of output: 6181


Script:

#!/bin/bash
# Correctly search for usage of create_test_sequencer in Rust files
rg --type rust "create_test_sequencer\(\)" --context 2

Length of output: 2691

crates/metrics/src/prometheus_exporter.rs (1)

Line range hint 1-236: Ohayo again, sensei! Let's talk about the big picture!

While we only had one small change to review, I couldn't help but notice how well-structured this file is! The prometheus_exporter.rs file is doing a great job handling Prometheus metrics. Here's a quick rundown:

  1. The install_recorder function sets up the Prometheus metrics recorder.
  2. The serve function handles serving metrics over HTTP with hooks for database and process metrics.
  3. The serve_with_hooks function provides flexibility for custom metric collection.
  4. Memory statistics collection is conditionally compiled based on the jemalloc feature.

Great work on maintaining a clean and modular structure! It makes the code easy to understand and extend. Keep up the excellent work, sensei!

crates/katana/pool/src/validation/stateful.rs (1)

27-28: Ohayo, sensei! LGTM, but could you clarify the reasoning?

The changes to the TxValidator struct look good. Removing the Debug derive and adding #[allow(missing_debug_implementations)] suggests an intentional decision not to implement Debug.

Could you please clarify the rationale behind this change? Is it for performance reasons or to avoid exposing internal details?

crates/katana/rpc/rpc/tests/torii.rs (2)

7-7: Ohayo, sensei! LGTM on the import updates!

The changes in imports reflect the project's evolution, transitioning from get_default_test_config to get_default_test_starknet_config and from SequencingConfig to SequencerConfig. These updates align well with the broader changes in the project structure and naming conventions.

Also applies to: 10-10


Line range hint 1-280: Ohayo, sensei! Let's wrap up this review!

Overall, the changes in this file are well-implemented and consistent. They successfully transition the tests to use the new SequencerConfig and get_default_test_starknet_config(), aligning with the broader project updates. The core logic of the tests remains intact, which is great for maintaining the integrity of your test suite.

A few key points:

  1. The import updates reflect the project's evolution.
  2. Both test functions now use the updated configuration structure.
  3. The explicit setting of the no_mining parameter in both tests enhances clarity.

The minor suggestions for using the struct update syntax would further improve readability and consistency, but they're not critical changes.
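For reference, sensei, the struct update syntax mentioned above looks like this. The field names are illustrative, not the real `SequencerConfig`:

```rust
// Illustrative config; field names are hypothetical.
#[derive(Debug, PartialEq)]
struct SequencerConfig {
    no_mining: bool,
    block_time: Option<u64>,
}

impl Default for SequencerConfig {
    fn default() -> Self {
        Self { no_mining: false, block_time: None }
    }
}

fn main() {
    // Override only the field under test; take everything else from Default.
    let config = SequencerConfig { no_mining: true, ..Default::default() };
    println!("{config:?}");
}
```

This keeps tests robust against new fields being added to the config later, since unspecified fields fall through to `Default`.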

Great job on keeping the tests up-to-date with the project changes, sensei!

crates/katana/rpc/rpc/src/torii.rs (4)

25-25: Ohayo, sensei! Nice use of Arc for shared ownership.

The change from BlockProducer<EF> to Arc<BlockProducer<EF>> is a good move. It allows for shared ownership of the BlockProducer, which can be beneficial in concurrent scenarios. This change suggests that the BlockProducer might be accessed from multiple places simultaneously, improving the overall design for concurrency.


33-33: Ohayo again, sensei! Excellent use of Arc::clone here.

The update to use Arc::clone(&self.block_producer) in the Clone implementation is spot on. This change correctly handles the new Arc<BlockProducer<EF>> type, ensuring that the reference count is properly incremented when cloning the ToriiApi struct. It's a necessary and correct adjustment following the earlier change to the block_producer field type.


39-43: Ohayo once more, sensei! Great job on both functionality and formatting.

The update to the new method is twofold and both changes are commendable:

  1. The parameter type change from BlockProducer<EF> to Arc<BlockProducer<EF>> aligns perfectly with the earlier modification to the struct field. This ensures consistency throughout the implementation.

  2. The improved formatting, with parameters listed on separate lines, enhances code readability. This is particularly beneficial for methods with multiple parameters, making the code easier to read and maintain.

Keep up the excellent work, sensei!


Line range hint 114-114: Ohayo, debugging sensei! Let's clean up those print statements.

I noticed you've added two println! statements, likely for debugging purposes:

  1. println!("ohayo"); (line 114)
  2. println!("txs count: {}", pending_transactions.len()); (line 145)

While these are helpful during development, it's best to remove them before merging to production. Consider using a logging framework for more structured debugging in the future.

Also, given the change to Arc<BlockProducer<EF>>, it might be worth double-checking if any adjustments are needed in this method to properly dereference the Arc when accessing the BlockProducer. The lack of changes here suggests it might be fine, but it's worth a second look to ensure everything works as expected with the new Arc wrapper.

Would you like me to provide a diff for removing these print statements?

Also applies to: 145-145

crates/katana/core/src/service/messaging/service.rs (1)

88-88: Ohayo, sensei! LGTM, but let's verify the change consistency.

The update from backend.chain_spec.id to backend.chain_id looks good and simplifies access to the chain identifier. Nice work on keeping it consistent for both Ethereum and Starknet messaging modes!

To ensure this change is applied consistently across the codebase, let's run a quick check:

This will help us spot any inconsistencies and ensure the refactoring is complete across the project.

Also applies to: 106-106

✅ Verification successful

Ohayo, sensei! Verification Successful.

The chain_spec.id references have been successfully replaced with chain_id across the codebase. No remaining instances of chain_spec.id were found. Great job ensuring consistency!

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of chain_id vs chain_spec.id

# Test 1: Check for any remaining instances of chain_spec.id
echo "Checking for remaining instances of chain_spec.id:"
rg --type rust 'chain_spec\.id'

# Test 2: Verify the usage of chain_id
echo "Verifying usage of chain_id:"
rg --type rust 'chain_id'

Length of output: 21064

crates/katana/rpc/rpc/src/starknet/mod.rs (1)

56-56: Ohayo, sensei! Nice work on enhancing thread safety!

The change from BlockProducer<EF> to Arc<BlockProducer<EF>> is a solid improvement. This modification allows for shared ownership of the BlockProducer, which is particularly beneficial in concurrent scenarios. The use of Arc ensures thread-safe access to the BlockProducer, promoting better resource management and potentially improving performance in multi-threaded environments.

crates/katana/rpc/rpc/src/starknet/read.rs (2)

32-32: Ohayo, sensei! LGTM: Simplified chain_id access

The change to access chain_id directly from self.inner.backend.chain_id looks good. It simplifies the code and maintains consistency with other parts of the implementation.


498-498: Ohayo, sensei! LGTM: Consistent chain_id access

The change to use this.inner.backend.chain_id is consistent with the modifications in other methods. This update maintains uniformity across the implementation.

crates/katana/node-bindings/src/json.rs (3)

19-26: Ohayo, sensei! Excellent refactoring of the JsonLog structure.

Renaming JsonLog to JsonLogMessage and simplifying the generic type improves code clarity and maintainability. This change makes the purpose of the struct more explicit.


41-46: Update all usages of KatanaInfo to include the new address field.

Adding the address field to KatanaInfo enhances the log information. Ensure that any code interacting with KatanaInfo accounts for this new field to prevent potential issues.

Run the following script to locate usages of KatanaInfo and verify they handle the address field:

#!/bin/bash
# Description: Find all instances where `KatanaInfo` is used and display surrounding code.

rg --type rust 'KatanaInfo' -A 3 -B 1

33-39: Verify consistent error handling in deserialization functions.

The custom deserialization function deserialize_katana_info currently maps errors using serde::de::Error::custom. To maintain consistency, ensure that all similar deserialization functions in the codebase handle errors in the same manner.

Run the following script to identify custom deserialization functions and review their error handling:

crates/katana/core/src/backend/config.rs (1)

18-18: Ensure all usages of fork_block_number are updated due to type change

Ohayo, sensei! The field fork_block_number has changed from Option<BlockNumber> to Option<u64>. Please verify that all references to this field have been updated accordingly in the codebase to prevent type mismatches or unexpected behavior.

Run the following script to check for usages of fork_block_number:

✅ Verification successful

Verified: All usages of fork_block_number have been successfully updated to Option<u64>, ensuring type consistency throughout the codebase.

Ohayo, sensei! Great job on updating all the fork_block_number references.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all usages of `fork_block_number` and confirm they are using `Option<u64>`.

# Search for references to `fork_block_number` in the codebase.
rg --type rust 'fork_block_number'

Length of output: 1169

crates/dojo-test-utils/src/sequencer.rs (1)

122-130: LGTM

Ohayo, sensei! The get_default_test_starknet_config function is well-implemented and sets up the StarknetConfig appropriately for testing purposes.

crates/katana/tasks/src/manager.rs (2)

116-116: ⚠️ Potential issue

Calling cancel() in Drop may lead to unexpected cancellations

Ohayo, sensei! In the Drop implementation of TaskManager, calling self.on_cancel.cancel(); will trigger cancellation when any instance of TaskManager is dropped. If there are multiple clones, dropping one will cancel tasks managed by others. This could lead to tasks being cancelled unexpectedly. Please verify if this behavior is intended.


15-15: ⚠️ Potential issue

Potential unintended behavior when cloning TaskManager

Ohayo, sensei! Adding Clone to TaskManager allows instances to be cloned. However, since on_cancel is a shared CancellationToken, cloning the TaskManager results in clones sharing the same cancellation token. This means that cancelling tasks via one clone will affect all other clones, which may not be the intended behavior. Please verify if this is desired.
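To make the shared-token concern concrete, here is a toy model of a cancellation token (not the real tokio_util API, just an Arc-backed flag with the same clone semantics): cancelling through any clone is visible through all of them.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Toy stand-in for tokio_util::sync::CancellationToken: clones share
// one underlying flag, so cancelling via any clone cancels them all.
#[derive(Clone)]
struct Token(Arc<AtomicBool>);

impl Token {
    fn new() -> Self {
        Token(Arc::new(AtomicBool::new(false)))
    }
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}
```

This is exactly why a Clone derive on TaskManager deserves scrutiny: the derived clone copies the Arc handle, not the flag.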

To verify where TaskManager is cloned in the codebase, you can run the following script:

✅ Verification successful

Ohayo, sensei! It looks like the initial search didn't find any instances where TaskManager is being cloned. To ensure we cover all possible cloning methods, let's run a more comprehensive search.


Ohayo, sensei! Since we didn't find any cloning instances with the previous searches, let's broaden our search to ensure we're not missing anything.


No Cloning Instances Found for TaskManager

Ohayo, sensei! After thoroughly searching the codebase, we found no instances where TaskManager is being cloned. Therefore, adding Clone to TaskManager does not introduce any unintended shared CancellationToken behavior.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all places where TaskManager is cloned.

# Test: Search for `.clone()` calls on `TaskManager` instances. Expect: Identify all clone usages.
rg --type rust 'TaskManager.*\.clone\(\)'

Length of output: 43


Script:

#!/bin/bash
# Description: Find all places where TaskManager is cloned using different methods.

# Method 1: Using .clone() method
rg --type rust 'TaskManager\s*->\s*clone\s*\(\)'

# Method 2: Using Clone::clone(&task_manager)
rg --type rust 'Clone::clone\s*\(&\s*task_manager\s*\)'

Length of output: 108


Script:

#!/bin/bash
# Description: Broadly search for any cloning of TaskManager instances.

# Search for any .clone() calls
rg --type rust '\.clone\s*\(\s*\)' | rg 'TaskManager'

# Search for any Clone::clone calls
rg --type rust 'Clone::clone\s*\(\s*&?\s*TaskManager\s*\)'

Length of output: 114

crates/katana/rpc/rpc/src/starknet/trace.rs (1)

30-30: The chain_id retrieval update looks good

The change to retrieve chain_id from self.inner.backend.chain_id is appropriate and aligns with the backend structure.

crates/katana/node/src/lib.rs (3)

83-87: Ohayo, sensei! Confirm mutation of starknet_config

The start function takes mut starknet_config: StarknetConfig by value and modifies it internally. Because the config is moved into the function, the mutations are local to start and cannot leak back to callers; confirm this ownership model matches the intended design.


199-211: Ohayo, sensei! Ensure metrics service handles possible database absence

When initializing the metrics service, the code assumes that the db is available. If db is None, the reports vector will be empty, which might affect the metrics collection. Please verify that this behavior is acceptable or handle the None case appropriately.


55-60: Ohayo, sensei! Ensure all necessary fields are included in the Handle struct

The new Handle struct lacks fields like prometheus_handle and db that were present in the previous Node struct. Please verify that omitting these fields doesn't affect any functionality that depends on them.

Run the following script to check for any usages of prometheus_handle and db in the codebase:

✅ Verification successful

Ohayo, sensei! After reviewing the search results, it appears that omitting prometheus_handle and db from the Handle struct does not impact functionality. The backend field likely manages the database interactions, and prometheus_handle seems to be handled appropriately within lib.rs.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for references to `prometheus_handle` and `db` outside the provided context.

rg --type rust 'prometheus_handle|db'

Length of output: 126931

bin/katana/src/cli/node.rs (15)

13-13: Ohayo, sensei! Added SocketAddr import is appropriate.

The inclusion of use std::net::SocketAddr; is necessary for handling socket addresses in the server configuration. Good job on adding this import.


35-36: Ohayo, sensei! Correctly added imports for ServerConfig and ApiKind.

The addition of ServerConfig from katana_rpc::config and ApiKind from katana_rpc_api is appropriate for the updated server and API configurations.


99-99: Ohayo, sensei! Added messaging field to NodeArgs.

Including the messaging option in NodeArgs allows for better configuration of messaging with other chains. This enhances the node's flexibility in network interactions.


118-118: Ohayo, sensei! Set default server port to 5050.

Setting the default port to 5050 ensures consistency and reduces the need for manual configuration when deploying the node.


124-124: Ohayo, sensei! Updated host configuration in ServerOptions.

Changing host to an Option<String> without a default value requires careful handling to prevent runtime issues. Verify that host is properly set or defaulted in runtime logic.

Since the server_config method provides a default "0.0.0.0" when host is None, this should prevent any issues. Good attention to detail.


127-127: Ohayo, sensei! Set default max_connections to 100.

Establishing 100 as the default for max_connections is a reasonable choice to handle concurrent connections without overwhelming the server.


223-228: Ohayo, sensei! Refactored node startup sequence.

The execute method now cleanly initializes server_config, sequencer_config, and starknet_config, enhancing readability and separation of concerns. Starting the node with katana_node::start simplifies the startup process.


231-239: Ohayo, sensei! Improved shutdown handling with tokio::select!.

Using tokio::select! to wait for OS signals or the TaskManager shutdown streamlines the shutdown process and ensures graceful termination.


242-243: Ohayo, sensei! Logging shutdown message before stopping the node.

Adding a log statement before shutting down enhances observability, informing users that the node is terminating.


249-251: Ohayo, sensei! Updated log filters in init_logging.

Adjusting the DEFAULT_LOG_FILTER settings refines the logging output, providing more detailed information for specific modules while suppressing unnecessary logs from others. This improves debugging efficiency.


277-289: Ohayo, sensei! Enhanced server_config with default host and API kinds.

The server_config method now defaults the host to "0.0.0.0" if not specified, ensuring the server listens on all interfaces. Including ApiKind::Starknet, ApiKind::Torii, and ApiKind::Saya by default, and ApiKind::Dev in dev mode, provides flexible API exposure.


Line range hint 294-335: Ohayo, sensei! Refined starknet_config construction.

The starknet_config method now builds a StarknetConfig with enhanced clarity, correctly incorporating disable_fee, disable_validate, fork_rpc_url, fork_block_number, and environment options. This refactoring improves maintainability and readability.


Line range hint 340-389: Ohayo, sensei! Updated print_intro for better startup information.

The print_intro function now provides more detailed startup logs, including the server address with an eye-catching rocket emoji 🚀. This enhances user experience by clearly displaying important information upon node startup.


462-474: Ohayo, sensei! Added test test_starknet_config_default.

Including this test ensures that the default StarknetConfig values are correctly set, improving code reliability and catching potential regressions in default configurations.


496-505: Ohayo, sensei! Added test test_starknet_config_custom.

This test verifies that custom arguments correctly configure the StarknetConfig, enhancing test coverage and ensuring that customization works as intended.

crates/katana/node-bindings/src/lib.rs (2)

587-592: Test function is well implemented

Ohayo, sensei! The test can_launch_katana correctly verifies the default behavior of Katana. The assertions confirm that the default number of accounts and the chain ID are as expected.


625-629: Specific port test is valid

Ohayo, sensei! The test can_launch_katana_with_specific_port effectively checks that Katana launches on the specified port. The assertion ensures the port is set correctly.

crates/katana/rpc/rpc/tests/starknet.rs (13)

11-14: Ohayo sensei! Updated imports look good.

The imports of get_default_test_starknet_config and SequencerConfig are correctly updated and align with the new configuration structure.


42-42: Ohayo sensei! Initialization of TestSequencer is correct.

The use of SequencerConfig::default() and get_default_test_starknet_config() in TestSequencer::start is appropriate and matches the updated configuration methods.


97-97: Ohayo sensei! Consistent updates in declare_and_deploy_legacy_contract test.

The initialization of the TestSequencer in this test function has been correctly updated to use the new configuration structures.


155-159: Ohayo sensei! Fee configuration is properly applied.

The disable_fee flag is correctly set in starknet_config, and sequencer_config is appropriately initialized with the new block_time parameter.


242-242: Ohayo sensei! Configuration updates in estimate_fee test are accurate.

The TestSequencer is correctly initialized with the updated configuration methods, ensuring consistency across tests.


361-365: Ohayo sensei! Fee validation test configurations are correct.

The disable_fee flag is set to false, ensuring that the fee validation logic is properly tested.


395-399: Ohayo sensei! Fee and validation configurations are appropriately set.

The updates accurately configure both disable_fee and disable_validate flags in the tests for transactions with insufficient fees.


461-465: Ohayo sensei! Accurate setup for invalid signature tests.

The test correctly initializes a new account with a random signer to simulate invalid signatures, aligning with the intended test case.


518-521: Ohayo sensei! Nonce handling in tests is correctly implemented.

The configuration and logic for testing transactions with invalid nonces are properly updated and function as intended.


586-588: Ohayo sensei! Event retrieval configurations are precise.

The no_mining flag is correctly set in sequencer_config for testing event retrieval without mining blocks.


672-674: Ohayo sensei! Pending events test setup is correct.

The configurations for testing event retrieval with pending transactions are appropriately updated.


762-766: Ohayo sensei! Trace test configurations are properly adjusted.

The no_mining flag and the updated configuration methods are correctly applied in the trace tests.


814-818: Ohayo sensei! Block trace tests are correctly configured.

The initialization of TestSequencer with the new configuration ensures accurate testing of block traces.

crates/katana/core/src/backend/mod.rs (1)

207-281: Ohayo, sensei! Ensure tests cover new forking logic

The added forking logic in the new method introduces significant changes. It's important to have tests that cover these new pathways to prevent regressions.

Would you like assistance in writing tests for the forking functionality?

Comment on lines +4 to +9
#[derive(Debug, Default)]
pub struct SequencerConfig {
pub block_time: Option<u64>,
pub no_mining: bool,
pub messaging: Option<crate::service::messaging::MessagingConfig>,
}

🛠️ Refactor suggestion

Ohayo again, sensei! The SequencerConfig structure looks solid, but let's consider a few enhancements.

The structure provides a good foundation for sequencer configuration. Here are some suggestions to improve it further:

  1. Consider adding documentation comments for each field to explain their purpose and expected values.
  2. The no_mining field might be clearer as enable_mining: bool with an inverted meaning, making it more intuitive.
  3. If block_time has a typical default value, consider using block_time: u64 with a default implementation instead of Option<u64>.

Example improvement:

#[derive(Debug)]
pub struct SequencerConfig {
    /// The time between blocks in milliseconds. None means variable block time.
    pub block_time: Option<u64>,
    /// Whether mining is enabled for this sequencer.
    pub enable_mining: bool,
    /// Optional messaging configuration.
    pub messaging: Option<crate::service::messaging::MessagingConfig>,
}

impl Default for SequencerConfig {
    fn default() -> Self {
        Self {
            block_time: None,
            enable_mining: true,
            messaging: None,
        }
    }
}

These changes would make the config more self-documenting and potentially easier to use.

What do you think about these suggestions, sensei?
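A quick self-contained check of the proposed default (with MessagingConfig stubbed out here, since the real type lives in crate::service::messaging):

```rust
// Stub for crate::service::messaging::MessagingConfig, which lives elsewhere.
struct MessagingConfig;

struct SequencerConfig {
    /// The time between blocks in milliseconds. None means variable block time.
    block_time: Option<u64>,
    /// Whether mining is enabled for this sequencer.
    enable_mining: bool,
    /// Optional messaging configuration.
    messaging: Option<MessagingConfig>,
}

impl Default for SequencerConfig {
    fn default() -> Self {
        Self { block_time: None, enable_mining: true, messaging: None }
    }
}
```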

@@ -77,15 +76,14 @@ impl<'a> TaskBuilder<'a> {
F: Future + Send + 'static,
F::Output: Send + 'static,
{
let Self { spawner, instrument, graceful_shutdown, name } = self;
let Self { manager, instrument, graceful_shutdown, .. } = self;

💡 Codebase verification

Graceful Shutdown Logging Removed

Ohayo, sensei!

It appears that logging for graceful shutdown has been removed and no new logging was found elsewhere. This might impact our ability to debug graceful shutdowns. Was this removal intentional, and if so, how are we handling graceful shutdown logging now?

🔗 Analysis chain

Ohayo! The spawn method is looking sleek, sensei!

The changes in the spawn method align perfectly with the TaskManager refactoring. The cancellation token retrieval has been updated accordingly.

However, I noticed that the logging for graceful shutdown has been removed. Was this intentional? If so, are we planning to handle graceful shutdown logging differently?

To ensure we haven't lost important logging elsewhere, let's check for any new logging related to graceful shutdown:

Also applies to: 84-84, 101-101

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for new logging related to graceful shutdown
rg --type rust 'graceful.*shutdown.*log' crates/

Length of output: 4562

@@ -74,31 +74,28 @@ type BlockProductionWithTxnsFuture =
#[allow(missing_debug_implementations)]
pub struct BlockProducer<EF: ExecutorFactory> {
/// The inner mode of mining.
pub producer: Arc<RwLock<BlockProducerMode<EF>>>,
pub producer: RwLock<BlockProducerMode<EF>>,

💡 Codebase verification

Ohayo, sensei! Remaining Arc usages of BlockProducer detected.

While the producer field in block_producer.rs has been updated to remove Arc, there are still multiple instances of Arc<BlockProducer<EF>> across the codebase:

  • crates/katana/core/src/service/mod.rs
  • crates/katana/rpc/rpc/src/torii.rs
  • crates/katana/rpc/rpc/src/saya.rs
  • crates/katana/rpc/rpc/src/dev.rs
  • crates/katana/rpc/rpc/src/starknet/mod.rs
  • crates/katana/node/src/lib.rs

This inconsistency may lead to ownership and thread safety issues. Please review and update these instances to align with the new ownership model.

🔗 Analysis chain

Ohayo, sensei! Simplified ownership model looks good.

The change from Arc<RwLock<BlockProducerMode<EF>>> to RwLock<BlockProducerMode<EF>> for the producer field simplifies the ownership model. This aligns well with the removal of the Clone implementation for BlockProducer<EF>. However, be mindful that this change may affect how BlockProducer instances are shared across threads if that was a previous requirement.

Let's check if there are any remaining uses of Arc for BlockProducer in the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
rg "Arc<BlockProducer" --type rust

Length of output: 1168

Comment on lines +48 to 54
#[derive(Deserialize)]
pub struct AccountInfo {
pub balance: String,
pub class_hash: String,
pub private_key: String,
pub public_key: String,
}

🛠️ Refactor suggestion

Consider adding field validation for AccountInfo.

The AccountInfo struct holds critical data like private_key and public_key. Implementing validation checks ensures data integrity and enhances security.

You might derive Validate and add custom validation logic:

use validator::Validate;

#[derive(Deserialize, Validate)]
pub struct AccountInfo {
    #[validate(length(min = 1))]
    pub balance: String,
    // Add similar validations for other fields
}

Comment on lines +27 to +31
#[derive(Deserialize)]
pub struct JsonLogFields {
#[serde(deserialize_with = "deserialize_katana_info")]
pub message: KatanaInfo,
}

🛠️ Refactor suggestion

Ensure robust deserialization in JsonLogFields.

While using deserialize_katana_info for the message field is effective, consider implementing error handling to manage potential deserialization failures gracefully. This could prevent panics if the log message format changes unexpectedly.

You might update the deserialization function to handle errors:

 #[derive(Deserialize)]
 pub struct JsonLogFields {
-    #[serde(deserialize_with = "deserialize_katana_info")]
-    pub message: KatanaInfo,
+    #[serde(deserialize_with = "deserialize_katana_info", default)]
+    pub message: Option<KatanaInfo>,
 }

Additionally, modify deserialize_katana_info to return an Option<KatanaInfo>:

-fn deserialize_katana_info<'de, D>(deserializer: D) -> Result<KatanaInfo, D::Error>
+fn deserialize_katana_info<'de, D>(deserializer: D) -> Result<Option<KatanaInfo>, D::Error>
 where
     D: serde::Deserializer<'de>,
 {
     let s = String::deserialize(deserializer)?;
-    serde_json::from_str(&s).map_err(serde::de::Error::custom)
+    match serde_json::from_str(&s) {
+        Ok(info) => Ok(Some(info)),
+        Err(_) => Ok(None),
+    }
 }

Committable suggestion was skipped due to low confidence.

Comment on lines +286 to +289
let starknet_config = get_default_test_starknet_config();
let sequencer_config = SequencerConfig { block_time, ..Default::default() };

let sequencer = TestSequencer::start(sequencer_config, starknet_config).await;

🛠️ Refactor suggestion

Ohayo sensei! Consider optimizing concurrency handling.

Using Arc<Mutex<_>> for shared state in concurrent_transactions_submissions may introduce contention. Consider using tokio::sync::RwLock or atomic types for better performance in concurrent scenarios.

Refactor the code to use RwLock for read-heavy workloads or atomic counters if only simple increments are needed.
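As a sketch of the atomic-counter variant (names are illustrative, not taken from the test file): a shared AtomicU64 replaces Arc<Mutex<u64>> for simple increments, so writers never block each other.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Count submissions with an atomic instead of Arc<Mutex<u64>>:
// fetch_add needs no lock, avoiding contention between writers.
fn count_submissions(workers: u64, per_worker: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_worker {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::SeqCst)
}
```

The same idea carries over to tokio tasks; only the spawn and join primitives change.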

Comment on lines +63 to +65
.block_number()
.await
.expect("failed to fetch block number from forked network")

⚠️ Potential issue

Ohayo, sensei! Avoid panics by handling errors when fetching the block number

The use of expect() on line 65 can lead to a panic if fetching the block number fails. It's better to handle the error to ensure the application remains stable.

Suggested change:

-     .await
-     .expect("failed to fetch block number from forked network")
+     .await
+     .map_err(|err| {
+         // Handle the error, e.g., log and return an error
+         eprintln!("Failed to fetch block number: {:?}", err);
+         err
+     })?
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    .block_number()
-    .await
-    .expect("failed to fetch block number from forked network")
+    .block_number()
+    .await
+    .map_err(|err| {
+        // Handle the error, e.g., log and return an error
+        eprintln!("Failed to fetch block number: {:?}", err);
+        err
+    })?

Comment on lines +70 to +72
let MaybePendingBlockWithTxHashes::Block(block) = block else {
panic!("block to be forked is a pending block")
};

⚠️ Potential issue

Ohayo, sensei! Replace panic! with proper error handling for pending blocks

Panicking on line 71 when encountering a pending block can crash the application. Consider returning an error instead to handle this case gracefully.

Suggested change:

- let MaybePendingBlockWithTxHashes::Block(block) = block else {
-     panic!("block to be forked is a pending block")
- };
+ let block = match block {
+     MaybePendingBlockWithTxHashes::Block(block) => block,
+     _ => {
+         // Handle the pending block case
+         return Err("Cannot fork a pending block".into());
+     }
+ };

Committable suggestion was skipped due to low confidence.

Comment on lines +97 to +101
match block.status {
BlockStatus::AcceptedOnL1 => FinalityStatus::AcceptedOnL1,
BlockStatus::AcceptedOnL2 => FinalityStatus::AcceptedOnL2,
_ => panic!("unable to fork for non-accepted block"),
},

⚠️ Potential issue

Ohayo, sensei! Prevent panics by handling unsupported block statuses

Using panic! on line 100 for unsupported block statuses may cause crashes. It's advisable to handle this situation by returning an error.

Suggested change:

- _ => panic!("unable to fork for non-accepted block"),
+ _ => {
+     // Handle unsupported block status
+     return Err("Unable to fork non-accepted block".into());
+ },

Committable suggestion was skipped due to low confidence.


let blockchain: Blockchain = if let Some(forked_url) = &config.fork_rpc_url {
let provider = Arc::new(JsonRpcClient::new(HttpTransport::new(forked_url.clone())));
let forked_chain_id = provider.chain_id().await.unwrap();

⚠️ Potential issue

Ohayo, sensei! Please handle potential errors instead of using unwrap()

Using unwrap() on line 57 may cause a panic if provider.chain_id() fails. To improve reliability, consider handling the error explicitly.

Suggested change:

- let forked_chain_id = provider.chain_id().await.unwrap();
+ let forked_chain_id = match provider.chain_id().await {
+     Ok(chain_id) => chain_id,
+     Err(err) => {
+         // Handle the error appropriately, e.g., log and return an error
+         return Err(err.into());
+     }
+ };

Committable suggestion was skipped due to low confidence.

@steebchen steebchen closed this Oct 11, 2024
@steebchen steebchen deleted the temp-check-katana branch October 11, 2024 18:31