
Fix bug: sync DAG block not stored #4259

Open

jackzhhuang wants to merge 16 commits into base: dag-master

Conversation

jackzhhuang
Collaborator

jackzhhuang commented Nov 4, 2024

Pull request type

Please check the type of change your PR introduces:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Other information

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced management of DAG sync blocks during synchronization.
    • New test cases added for BlockDAG functionality, improving test coverage and robustness.
  • Bug Fixes

    • Updated container startup command to include removal of the sync directory, ensuring a cleaner environment.
  • Documentation

    • Added comments and documentation clarifying method behaviors in the GHOSTDAG protocol.
  • Chores

    • Introduced additional debug logging in the pruning point management process for better traceability.
    • Improved logging clarity in the BlockDAG calculations for enhanced debugging.


coderabbitai bot commented Nov 4, 2024

Walkthrough

The pull request introduces several modifications across multiple files. Key changes include the addition of a line to delete all DAG sync blocks in the ensure_dag_parent_blocks_exist method of the BlockCollector struct, and updates to the fetch_blocks method to improve block management. Additionally, the starcoin StatefulSet configuration is adjusted to remove the sync directory during startup. The GhostdagManager struct sees a method rename and changes to error handling, while the PruningPointManagerT struct receives enhanced debug logging in the next_pruning_point method.

Changes

  • sync/src/tasks/block_sync_task.rs: modified ensure_dag_parent_blocks_exist to delete all DAG sync blocks under certain conditions; updated fetch_blocks so blocks fetched from the local store are also saved to sync_dag_store.
  • kube/manifest/starcoin-vega.yaml: updated container args to remove the sync directory during startup.
  • flexidag/src/ghostdag/protocol.rs: renamed verify_and_ghostdata to _verify_and_ghostdata and altered its error handling logic; removed bail from imports.
  • flexidag/src/prune/pruning_point_manager.rs: added debug logging in next_pruning_point to output key parameter values for better traceability.
  • flexidag/src/blockdag.rs: enhanced logging in calc_mergeset_and_tips to clarify details about next_ghostdata.
  • flexidag/tests/tests.rs: added add_and_print_with_difficulty for block creation with specified difficulty; modified add_and_print_with_pruning_point to standardize difficulty; introduced test_verification_blue_block_inconsistent and expanded existing tests for better coverage.

Possibly related PRs

Suggested reviewers

  • sanlee42
  • nkysg
  • yourmoonlight

🐇 In the code we hop and play,
Deleting blocks along the way.
With logs so clear, we trace our path,
In sync we dance, avoiding wrath.
A change here, a tweak there,
Together we build, with utmost care! 🌟


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
sync/src/tasks/block_sync_task.rs (3)

457-457: Add logging for DAG sync block deletion.

The deletion of DAG sync blocks should be logged for better debugging and monitoring. Also, consider moving this cleanup operation to after confirming successful sync of the current batch to prevent potential issues if sync fails.

-                self.sync_dag_store.delete_all_dag_sync_block()?;
+                let deleted_count = self.sync_dag_store.delete_all_dag_sync_block()?;
+                info!(
+                    "Deleted {} DAG sync blocks at checkpoint. block_number: {}, is_target_reached: {}",
+                    deleted_count,
+                    block_header.number(),
+                    block_header.number() >= self.target.target_id.number()
+                );

Line range hint 519-553: Improve error handling and performance in fetch_blocks.

The block fetching and saving logic could benefit from the following improvements:

  1. Better error handling beyond debug logging
  2. Batched operations for local store saves
  3. Metrics for monitoring fetch operations
     async fn fetch_blocks(&self, mut block_ids: Vec<HashValue>) -> Result<Vec<BlockHeader>> {
         let mut result = vec![];
+        let mut failed_saves = 0;
         block_ids.retain(|id| {
             match self.local_store.get_dag_sync_block(*id) {
                 Ok(op_dag_sync_block) => {
                     if let Some(dag_sync_block) = op_dag_sync_block {
                         match self.sync_dag_store.save_block(dag_sync_block.block.clone()) {
                             Ok(_) => {
                                 result.push(dag_sync_block.block.header().clone());
                                 false // read from local store, remove from p2p request
                             }
                             Err(e) => {
-                                debug!("failed to save block for: {:?}", e);
+                                failed_saves += 1;
+                                error!(
+                                    "Failed to save block to sync_dag_store: {:?}, block_id: {:?}",
+                                    e, id
+                                );
                                 true // need retaining
                             }
                         }
                     } else {
                         true // need retaining
                     }
                 }
                 Err(_) => true, // need retaining
             }
         });
+        if failed_saves > 0 {
+            warn!("Failed to save {} blocks to sync_dag_store", failed_saves);
+        }
+
+        // Process remote blocks in batches
         for chunk in block_ids.chunks(usize::try_from(MAX_BLOCK_REQUEST_SIZE)?) {
+            info!("Fetching {} blocks from remote peers", chunk.len());
             let remote_dag_sync_blocks = self.fetcher.fetch_blocks(chunk.to_vec()).await?;
+            let mut save_errors = 0;
             for (block, _) in remote_dag_sync_blocks {
-                self.local_store
+                if let Err(e) = self.local_store
                     .save_dag_sync_block(starcoin_storage::block::DagSyncBlock {
                         block: block.clone(),
                         children: vec![],
-                    })?;
-                self.sync_dag_store.save_block(block.clone())?;
+                    }) {
+                    save_errors += 1;
+                    error!("Failed to save block to local store: {:?}", e);
+                    continue;
+                }
+                if let Err(e) = self.sync_dag_store.save_block(block.clone()) {
+                    save_errors += 1;
+                    error!("Failed to save block to sync_dag_store: {:?}", e);
+                    continue;
+                }
                 result.push(block.header().clone());
             }
+            if save_errors > 0 {
+                warn!("Failed to save {} blocks from remote peers", save_errors);
+            }
         }
         Ok(result)
     }

Line range hint 1-1000: Consider architectural improvements for sync reliability.

The current implementation has several architectural aspects that could be improved:

  1. Storage Layer Consistency:

    • Consider implementing a transaction-like mechanism to ensure consistency between local_store and sync_dag_store
    • Add recovery mechanisms for partial failures during block saves
  2. Error Handling Strategy:

    • Implement a retry mechanism for failed block saves (a sketch follows this list)
    • Consider adding circuit breakers for remote fetching
  3. Monitoring:

    • Add metrics for sync progress, success rates, and storage operations
    • Consider implementing health checks for the sync process
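
As a rough sketch of the retry idea from the list above (the helper name, backoff constants, and closure-based signature are illustrative assumptions, not existing crate APIs):

```rust
use anyhow::{anyhow, Result};
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical helper: retry a fallible save with exponential backoff.
/// `save` stands in for a call like `sync_dag_store.save_block(block.clone())`.
fn save_with_retry(mut save: impl FnMut() -> Result<()>, max_attempts: u32) -> Result<()> {
    let mut delay = Duration::from_millis(50);
    let mut last_err = anyhow!("no attempts were made");
    for attempt in 1..=max_attempts {
        match save() {
            Ok(()) => return Ok(()),
            Err(e) => {
                // A real implementation would use the crate's logger instead.
                eprintln!("save attempt {attempt} failed: {e:?}");
                last_err = e;
                if attempt < max_attempts {
                    sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
    }
    Err(last_err)
}
```

A transaction-like guarantee across local_store and sync_dag_store (point 1) would need more than retries, for example a write-ahead record that lets a restart reconcile the two stores; the sketch above only covers transient failures.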
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 840c5ab and a6403bc.

📒 Files selected for processing (1)
  • sync/src/tasks/block_sync_task.rs (1 hunks)

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (3)
flexidag/src/prune/pruning_point_manager.rs (2)

78-82: Improve debug log formatting for better readability.

While adding debug logging is valuable for troubleshooting, the current format could be improved for readability.

Consider this more structured format:

-        debug!("previous_pruning_point: {:?}, previous_ghostdata: {:?}, next_ghostdata: {:?}, pruning_depth: {:?}, pruning_finality: {:?}", 
-            previous_pruning_point, previous_ghostdata, next_ghostdata,
-            pruning_depth, pruning_finality,
-        );
+        debug!(
+            "Calculating next pruning point:\n\
+             previous_point: {:?}\n\
+             previous_ghost: {:?}\n\
+             next_ghost: {:?}\n\
+             depth: {}\n\
+             finality: {}",
+            previous_pruning_point,
+            previous_ghostdata,
+            next_ghostdata,
+            pruning_depth,
+            pruning_finality
+        );

78-82: Add context about the pruning decision.

To aid in debugging, consider adding the minimum required blue score to the debug log, as this is a key factor in determining the next pruning point.

Add this information to the debug log:

         debug!(
-            "previous_pruning_point: {:?}, previous_ghostdata: {:?}, next_ghostdata: {:?}, pruning_depth: {:?}, pruning_finality: {:?}", 
-            previous_pruning_point, previous_ghostdata, next_ghostdata,
-            pruning_depth, pruning_finality,
+            "Pruning evaluation:\n\
+             min_required_blue_score: {}\n\
+             previous_point: {:?}\n\
+             previous_ghost: {:?}\n\
+             next_ghost: {:?}\n\
+             depth: {}\n\
+             finality: {}",
+            min_required_blue_score_for_next_pruning_point,
+            previous_pruning_point,
+            previous_ghostdata,
+            next_ghostdata,
+            pruning_depth,
+            pruning_finality
         );
flexidag/src/ghostdag/protocol.rs (1)

Line range hint 174-261: Improve error handling consistency

The function has different error handling behavior based on a magic number (10000000):

  • Below this number: Only logs a warning
  • Above this number: Fails with an error

This dual behavior could lead to:

  1. Inconsistent state handling across different block ranges
  2. Difficulty in debugging issues in lower-numbered blocks
  3. Potential consensus issues if verification is skipped

Consider:

  1. Documenting why this threshold exists
  2. Making the threshold configurable (sketched after the diff below)
  3. Using consistent error handling regardless of block number
-            if header.number() < 10000000 {
-                // no bail before 10000000
-                warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            } else {
-                bail!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            }
+            let message = format!(
+                "The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}",
+                header.id(),
+                blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(),
+                new_block_data.mergeset_blues
+            );
+            // Always log the issue
+            warn!("{}", message);
+            // Fail consistently to maintain integrity
+            bail!("{}", message);
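
As a sketch of making the threshold configurable (point 2 above); the type and field names are hypothetical and do not exist in the crate:

```rust
/// Hypothetical: lift the 10_000_000 magic number into a named,
/// documented, tunable value instead of an inline literal.
pub struct GhostdagVerificationConfig {
    /// Block number at or above which a blue-set mismatch fails hard;
    /// below it, the mismatch only logs a warning.
    pub strict_verification_from: u64,
}

impl Default for GhostdagVerificationConfig {
    fn default() -> Self {
        Self {
            strict_verification_from: 10_000_000,
        }
    }
}
```

Threading such a config through GhostdagManager would also make the leniency window testable.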
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c173559 and 48c9bac.

📒 Files selected for processing (3)
  • flexidag/src/blockdag.rs (1 hunks)
  • flexidag/src/ghostdag/protocol.rs (1 hunks)
  • flexidag/src/prune/pruning_point_manager.rs (1 hunks)

@@ -171,7 +171,7 @@ impl<
         Ok(new_block_data)
     }

-    pub(crate) fn verify_and_ghostdata(
+    pub(crate) fn _verify_and_ghostdata(

💡 Codebase verification

Remove underscore prefix from verify_and_ghostdata method

The underscore prefix is inconsistent with the method's usage pattern. The evidence shows:

  1. The method is part of a public API used by other crates (chain/src/chain.rs and chain/src/verifier/mod.rs)
  2. It's called through BlockDag::verify_and_ghostdata which is a public interface
  3. No direct usage of the underscored version exists in the codebase

The underscore prefix suggests internal use, but this is clearly a public-facing API method. Remove the underscore to match its actual visibility and usage pattern.

🔗 Analysis chain

Clarify method visibility intention

The underscore prefix suggests this method is intended for internal/private use, but it's still marked as pub(crate). This creates confusion about the intended visibility scope.

Let's check if this method is still being used by other modules:

Consider either:

  1. Making it truly private by removing pub(crate) if it's only used within this module
  2. Removing the underscore prefix if it's meant to be used by other modules within the crate
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining calls to verify_and_ghostdata
rg "verify_and_ghostdata" --type rust

Length of output: 536


Script:

#!/bin/bash
# Let's check the implementation of verify_and_ghostdata in blockdag.rs to understand the relationship
rg -A 10 "verify_and_ghostdata" flexidag/src/blockdag.rs

# Also check the trait implementation in chain/src/chain.rs
rg -A 10 "verify_and_ghostdata" chain/src/chain.rs

Length of output: 1804


Script:

#!/bin/bash
# Let's check the implementation in ghostdag/protocol.rs to understand the full context
rg -A 20 "_verify_and_ghostdata" flexidag/src/ghostdag/protocol.rs

# Also check if there are any direct usages of the underscore version
rg "_verify_and_ghostdata" --type rust

Length of output: 1085

Comment on lines 564 to 569

         _blue_blocks: &[BlockHeader],
         header: &BlockHeader,
     ) -> Result<GhostdagData, anyhow::Error> {
-        self.ghost_dag_manager()
-            .verify_and_ghostdata(blue_blocks, header)
+        self.ghost_dag_manager().ghostdag(&header.parents())
+        // self.ghost_dag_manager()
+        //     .verify_and_ghostdata(blue_blocks, header)

🛠️ Refactor suggestion

Method name is misleading

The method name verify_and_ghostdata suggests it performs verification, but the implementation now only retrieves ghost data without verification. Consider renaming the method to better reflect its current behavior.

-pub fn verify_and_ghostdata(
+pub fn get_ghostdata(
     _blue_blocks: &[BlockHeader],
     header: &BlockHeader,
 ) -> Result<GhostdagData, anyhow::Error> {

Committable suggestion skipped: line range outside the PR's diff.


⚠️ Potential issue

Security concern: Block verification has been removed

The implementation no longer verifies the blue blocks before processing them. This could potentially allow invalid blocks to be processed, which may compromise the integrity of the DAG structure.

Consider either:

  1. Restoring the verification logic
  2. Documenting why verification is no longer needed
-    _blue_blocks: &[BlockHeader],
+    blue_blocks: &[BlockHeader],
     header: &BlockHeader,
 ) -> Result<GhostdagData, anyhow::Error> {
-    self.ghost_dag_manager().ghostdag(&header.parents())
-    // self.ghost_dag_manager()
-    //     .verify_and_ghostdata(blue_blocks, header)
+    self.ghost_dag_manager()
+        .verify_and_ghostdata(blue_blocks, header)

Committable suggestion skipped: line range outside the PR's diff.

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c8b259a and 1a6c851.

📒 Files selected for processing (2)
  • flexidag/src/blockdag.rs (1 hunks)
  • flexidag/src/ghostdag/protocol.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • flexidag/src/blockdag.rs
🔇 Additional comments (2)
flexidag/src/ghostdag/protocol.rs (2)

5-5: LGTM: Import changes align with new error handling.

The removal of bail from anyhow imports is consistent with the new error handling approach.


174-174: Remove underscore prefix from method name.

As noted in past reviews, this method is part of a public API used by other crates. The underscore prefix suggests internal use, which is inconsistent with its actual usage pattern.

Comment on lines 224 to 225
warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
return self.ghostdag(&header.parents_hash());

⚠️ Potential issue

Improve error handling and logging.

The current implementation has several issues:

  1. Silently falling back to ghostdag when blue sets don't match could mask underlying problems
  2. The warning message could be more descriptive about why the blue sets don't match

Consider this alternative implementation:

-            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            return self.ghostdag(&header.parents_hash());
+            let expected_blues: HashSet<_> = blue_blocks.iter().map(|header| header.id()).collect();
+            let actual_blues: HashSet<_> = new_block_data.mergeset_blues.iter().skip(1).cloned().collect();
+            let missing_blues: Vec<_> = expected_blues.difference(&actual_blues).collect();
+            let extra_blues: Vec<_> = actual_blues.difference(&expected_blues).collect();
+            
+            warn!(
+                "Blue set mismatch for block {:?}:\nMissing blues: {:?}\nUnexpected blues: {:?}",
+                header.id(),
+                missing_blues,
+                extra_blues
+            );
+            
+            return Err(anyhow::anyhow!(
+                "Blue set verification failed for block {:?}",
+                header.id()
+            ));

This change:

  1. Provides more detailed logging about which blues are missing or unexpected (a toy run follows below)
  2. Propagates the error instead of silently falling back
  3. Makes debugging easier by exposing the actual discrepancy
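
A self-contained toy run of the set-difference diagnostics suggested above; string ids stand in for the real HashValue block ids:

```rust
use std::collections::HashSet;

fn main() {
    let expected_blues: HashSet<&str> = ["a", "b", "c"].into_iter().collect();
    let actual_blues: HashSet<&str> = ["b", "c", "d"].into_iter().collect();

    // difference() yields elements in the first set that are absent from the second.
    let missing: Vec<_> = expected_blues.difference(&actual_blues).collect();
    let extra: Vec<_> = actual_blues.difference(&expected_blues).collect();

    // Prints: missing blues: ["a"], unexpected blues: ["d"]
    println!("missing blues: {missing:?}, unexpected blues: {extra:?}");
}
```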

Committable suggestion skipped: line range outside the PR's diff.


github-actions bot commented Nov 6, 2024

Benchmark for 9f9cda4

Test Base PR %
accumulator_append 771.2±78.56µs 812.6±135.63µs +5.37%
block_apply/block_apply_10 384.3±14.16ms 362.5±7.78ms -5.67%
block_apply/block_apply_1000 43.0±1.73s 42.8±2.47s -0.47%
get_with_proof/db_store 44.9±7.02µs 43.5±3.38µs -3.12%
get_with_proof/mem_store 36.8±4.12µs 35.2±1.53µs -4.35%
put_and_commit/db_store/1 123.5±16.10µs 139.1±23.30µs +12.63%
put_and_commit/db_store/10 1006.3±47.14µs 1237.6±284.16µs +22.99%
put_and_commit/db_store/100 10.0±0.57ms 9.7±0.46ms -3.00%
put_and_commit/db_store/5 541.6±41.27µs 554.2±45.44µs +2.33%
put_and_commit/db_store/50 5.0±0.25ms 5.0±0.29ms 0.00%
put_and_commit/mem_store/1 74.5±9.69µs 71.2±7.97µs -4.43%
put_and_commit/mem_store/10 660.8±72.31µs 719.2±131.51µs +8.84%
put_and_commit/mem_store/100 6.6±0.58ms 6.6±0.73ms 0.00%
put_and_commit/mem_store/5 498.4±106.72µs 368.0±65.08µs -26.16%
put_and_commit/mem_store/50 3.5±0.53ms 3.3±0.31ms -5.71%
query_block/query_block_in(10)_times(100) 8.5±0.48ms 9.5±0.86ms +11.76%
query_block/query_block_in(10)_times(1000) 84.6±4.35ms 87.3±9.12ms +3.19%
query_block/query_block_in(10)_times(10000) 817.8±28.50ms 862.0±48.80ms +5.40%
query_block/query_block_in(1000)_times(100) 1198.0±70.32µs 1275.4±76.09µs +6.46%
query_block/query_block_in(1000)_times(1000) 12.7±0.97ms 12.0±0.31ms -5.51%
query_block/query_block_in(1000)_times(10000) 135.4±7.51ms 123.6±5.51ms -8.71%
storage_transaction 1181.6±457.20µs 1014.8±366.48µs -14.12%
vm/transaction_execution/1 413.2±29.83ms 412.3±9.98ms -0.22%
vm/transaction_execution/10 132.3±10.79ms 139.0±14.66ms +5.06%
vm/transaction_execution/20 133.5±8.85ms 119.8±6.03ms -10.26%
vm/transaction_execution/5 176.0±18.32ms 165.7±8.61ms -5.85%
vm/transaction_execution/50 157.2±12.25ms 141.7±6.37ms -9.86%

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 1a6c851 and 264a5d6.

📒 Files selected for processing (1)
  • flexidag/src/blockdag.rs (1 hunks)
🔇 Additional comments (1)
flexidag/src/blockdag.rs (1)

567-579: Method name is still misleading.

The method name suggests it performs verification and returns ghost data, but it specifically verifies blue blocks matching. This was previously flagged in a past review.

Consider renaming to better reflect its purpose:

-    pub fn verify_and_ghostdata(
+    pub fn verify_blue_blocks_and_get_ghostdata(

Comment on lines 567 to 579

        let ghostdata = self.ghost_dag_manager().ghostdag(&header.parents())?;
        if ghostdata
            .mergeset_blues
            .iter()
            .skip(1)
            .cloned()
            .collect::<HashSet<_>>()
            == blue_blocks.iter().map(|b| b.id()).collect::<HashSet<_>>()
        {
            Ok(ghostdata)
        } else {
            bail!("blue blocks are not correct for remote blue block: {:?}, local blue blocks: {:?}, header: {:?}", blue_blocks, ghostdata.mergeset_blues, header);
        }

⚠️ Potential issue

Fix fragile iteration and improve performance.

The current implementation has several issues:

  1. skip(1) unconditionally drops the first element; Iterator::skip never panics, but on an empty mergeset_blues it silently yields an empty comparison set, which can mask a missing selected parent
  2. Creating two HashSets on every call is inefficient
  3. The error message could be more structured

Consider this improved implementation:

-        let ghostdata = self.ghost_dag_manager().ghostdag(&header.parents())?;
-        if ghostdata
-            .mergeset_blues
-            .iter()
-            .skip(1)
-            .cloned()
-            .collect::<HashSet<_>>()
-            == blue_blocks.iter().map(|b| b.id()).collect::<HashSet<_>>()
-        {
-            Ok(ghostdata)
-        } else {
-            bail!("blue blocks are not correct for remote blue block: {:?}, local blue blocks: {:?}, header: {:?}", blue_blocks, ghostdata.mergeset_blues, header);
-        }
+        let ghostdata = self.ghost_dag_manager().ghostdag(&header.parents())?;
+        let blues = &ghostdata.mergeset_blues;
+        
+        // Skip first blue block only if there are any blue blocks
+        let local_blues: HashSet<_> = blues.iter().skip(!blues.is_empty() as usize).cloned().collect();
+        let remote_blues: HashSet<_> = blue_blocks.iter().map(|b| b.id()).collect();
+        
+        if local_blues == remote_blues {
+            Ok(ghostdata)
+        } else {
+            bail!(
+                "Blue blocks mismatch. Remote blue blocks: {:?}, Local blue blocks: {:?}, Header: {:?}",
+                remote_blues,
+                local_blues,
+                header
+            );
+        }


github-actions bot commented Nov 6, 2024

Benchmark for 0d17403

Test Base PR %
accumulator_append 802.6±125.80µs 825.5±108.35µs +2.85%
block_apply/block_apply_10 378.4±20.63ms 495.5±32.93ms +30.95%
block_apply/block_apply_1000 46.4±2.67s 44.0±2.47s -5.17%
get_with_proof/db_store 50.8±8.94µs 56.7±14.49µs +11.61%
get_with_proof/mem_store 39.2±4.97µs 39.3±4.60µs +0.26%
put_and_commit/db_store/1 150.3±27.01µs 116.7±8.92µs -22.36%
put_and_commit/db_store/10 1152.1±134.09µs 1197.2±153.15µs +3.91%
put_and_commit/db_store/100 11.3±1.89ms 11.3±1.33ms 0.00%
put_and_commit/db_store/5 579.9±85.85µs 558.6±53.95µs -3.67%
put_and_commit/db_store/50 5.9±1.05ms 6.0±0.91ms +1.69%
put_and_commit/mem_store/1 80.5±16.91µs 75.2±10.40µs -6.58%
put_and_commit/mem_store/10 664.6±92.98µs 748.4±116.85µs +12.61%
put_and_commit/mem_store/100 7.0±0.76ms 7.2±1.20ms +2.86%
put_and_commit/mem_store/5 453.8±83.25µs 337.8±43.92µs -25.56%
put_and_commit/mem_store/50 4.0±0.66ms 3.7±0.69ms -7.50%
query_block/query_block_in(10)_times(100) 9.2±0.82ms 9.0±1.55ms -2.17%
query_block/query_block_in(10)_times(1000) 100.6±11.64ms 133.2±17.13ms +32.41%
query_block/query_block_in(10)_times(10000) 962.7±132.66ms 902.5±65.17ms -6.25%
query_block/query_block_in(1000)_times(100) 1585.8±315.02µs 1522.8±186.30µs -3.97%
query_block/query_block_in(1000)_times(1000) 14.2±1.14ms 15.5±1.23ms +9.15%
query_block/query_block_in(1000)_times(10000) 158.0±15.58ms 151.0±20.18ms -4.43%
storage_transaction 1169.8±493.61µs 1448.6±647.68µs +23.83%
vm/transaction_execution/1 409.3±22.74ms 455.3±53.85ms +11.24%
vm/transaction_execution/10 130.8±6.29ms 156.3±15.04ms +19.50%
vm/transaction_execution/20 128.5±9.96ms 135.7±14.78ms +5.60%
vm/transaction_execution/5 167.1±14.28ms 175.1±10.11ms +4.79%
vm/transaction_execution/50 146.1±6.81ms 158.0±14.88ms +8.15%


github-actions bot commented Nov 7, 2024

Benchmark for 5c1818c

Test Base PR %
accumulator_append 878.5±157.92µs 845.2±138.59µs -3.79%
block_apply/block_apply_10 422.9±38.41ms 445.8±55.12ms +5.41%
block_apply/block_apply_1000 44.7±2.10s 43.7±1.76s -2.24%
get_with_proof/db_store 47.8±7.53µs 62.0±12.83µs +29.71%
get_with_proof/mem_store 36.5±2.38µs 47.4±7.26µs +29.86%
put_and_commit/db_store/1 135.4±14.26µs 161.9±26.57µs +19.57%
put_and_commit/db_store/10 1332.0±275.31µs 1162.1±119.43µs -12.76%
put_and_commit/db_store/100 10.3±1.09ms 12.3±2.66ms +19.42%
put_and_commit/db_store/5 632.4±82.86µs 660.1±105.58µs +4.38%
put_and_commit/db_store/50 7.3±2.28ms 5.7±0.90ms -21.92%
put_and_commit/mem_store/1 72.5±9.74µs 84.8±15.46µs +16.97%
put_and_commit/mem_store/10 724.7±115.58µs 733.7±149.98µs +1.24%
put_and_commit/mem_store/100 7.2±1.43ms 7.3±1.33ms +1.39%
put_and_commit/mem_store/5 402.2±88.67µs 430.1±84.07µs +6.94%
put_and_commit/mem_store/50 3.4±0.40ms 3.5±0.54ms +2.94%
query_block/query_block_in(10)_times(100) 9.0±0.59ms 9.1±0.49ms +1.11%
query_block/query_block_in(10)_times(1000) 87.1±8.52ms 91.3±7.97ms +4.82%
query_block/query_block_in(10)_times(10000) 971.9±72.33ms 1117.2±174.09ms +14.95%
query_block/query_block_in(1000)_times(100) 1633.0±304.66µs 2.3±0.35ms +40.85%
query_block/query_block_in(1000)_times(1000) 13.7±1.69ms 18.5±2.69ms +35.04%
query_block/query_block_in(1000)_times(10000) 132.8±7.82ms 143.7±16.16ms +8.21%
storage_transaction 1115.3±389.06µs 1114.8±405.68µs -0.04%
vm/transaction_execution/1 458.2±31.80ms 465.4±53.29ms +1.57%
vm/transaction_execution/10 147.0±9.78ms 179.3±24.80ms +21.97%
vm/transaction_execution/20 150.3±10.25ms 143.3±8.53ms -4.66%
vm/transaction_execution/5 197.7±12.41ms 171.9±10.44ms -13.05%
vm/transaction_execution/50 150.2±10.23ms 146.5±9.62ms -2.46%

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
flexidag/src/ghostdag/protocol.rs (2)

223-224: Document the skip(1) behavior

The comparison correctly skips the first element (selected parent) when comparing blue sets, but this behavior should be documented to explain why it's necessary.

Add a comment explaining the skip:

        if new_block_data
            .mergeset_blues
            .iter()
+           // Skip the selected parent as it's always blue in GHOSTDAG protocol
            .skip(1)
            .cloned()
            .collect::<HashSet<_>>()
            != remote_blue_set
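
A toy illustration of that skip, assuming (as the suggested comment states) that mergeset_blues stores the selected parent at index 0; string ids stand in for HashValues:

```rust
use std::collections::HashSet;

fn main() {
    // Index 0 is the selected parent; the rest are the other mergeset blues.
    let mergeset_blues = vec!["selected_parent", "blue_a", "blue_b"];
    let remote_blue_set: HashSet<&str> = ["blue_a", "blue_b"].into_iter().collect();

    // Drop the selected parent before comparing against the remote blue set.
    let local_blues: HashSet<&str> = mergeset_blues.iter().skip(1).cloned().collect();
    assert_eq!(local_blues, remote_blue_set);
}
```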

225-235: Enhance error messages and logging

The current implementation has good error recovery flow but could benefit from more descriptive messages:

  1. The warning and error messages are too similar
  2. The messages don't clearly indicate the difference between initial verification and fallback verification

Consider this improvement:

-            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
+            warn!(
+                "Initial blue set verification failed for block {:?}. Expected blues: {:?}, Computed blues: {:?}. Attempting recomputation...",
+                header.id(),
+                remote_blue_set,
+                new_block_data.mergeset_blues.iter().skip(1).cloned().collect::<Vec<_>>()
+            );
             let ghostdata = self.ghostdag(&header.parents_hash())?;
             if ghostdata
                 .mergeset_blues
                 .iter()
                 .skip(1)
                 .cloned()
                 .collect::<HashSet<_>>()
                 != remote_blue_set
             {
-                bail!("The ghost data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), ghostdata.mergeset_blues);
+                bail!(
+                    "Blue set verification failed after recomputation for block {:?}. Expected blues: {:?}, Recomputed blues: {:?}",
+                    header.id(),
+                    remote_blue_set,
+                    ghostdata.mergeset_blues.iter().skip(1).cloned().collect::<Vec<_>>()
+                );
             }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3b2a145 and 197bc4f.

📒 Files selected for processing (1)
  • flexidag/src/ghostdag/protocol.rs (1 hunks)
🔇 Additional comments (2)
flexidag/src/ghostdag/protocol.rs (2)

213-216: LGTM: Clean and idiomatic HashSet creation

The code efficiently creates a HashSet of block IDs from the blue blocks, which is perfect for the subsequent set comparison operations.


213-235: Consider addressing the method naming consistency

As noted in a previous review, the method name verify_and_ghostdata should be consistent with its public usage pattern. Since this PR modifies the method's implementation, it would be a good opportunity to address the naming convention.

Let's verify the method's usage pattern:

Consider removing the underscore prefix if the method is part of the public API.


github-actions bot commented Nov 7, 2024

Benchmark for 4cdd21e

Test Base PR %
accumulator_append 902.3±173.52µs 886.7±184.57µs -1.73%
block_apply/block_apply_10 410.9±21.79ms 381.3±30.82ms -7.20%
block_apply/block_apply_1000 45.4±2.46s 44.3±1.53s -2.42%
get_with_proof/db_store 44.1±3.00µs 48.9±7.53µs +10.88%
get_with_proof/mem_store 37.5±5.87µs 47.5±10.12µs +26.67%
put_and_commit/db_store/1 145.1±31.40µs 162.8±26.19µs +12.20%
put_and_commit/db_store/10 1074.8±73.40µs 1154.2±199.19µs +7.39%
put_and_commit/db_store/100 11.0±2.29ms 11.2±1.73ms +1.82%
put_and_commit/db_store/5 565.3±46.61µs 556.4±45.06µs -1.57%
put_and_commit/db_store/50 5.3±0.34ms 5.7±0.81ms +7.55%
put_and_commit/mem_store/1 78.7±12.73µs 75.9±10.55µs -3.56%
put_and_commit/mem_store/10 692.1±103.78µs 701.9±76.23µs +1.42%
put_and_commit/mem_store/100 6.3±0.34ms 7.0±1.27ms +11.11%
put_and_commit/mem_store/5 340.6±43.30µs 336.9±39.65µs -1.09%
put_and_commit/mem_store/50 3.3±0.48ms 3.3±0.38ms 0.00%
query_block/query_block_in(10)_times(100) 8.5±0.57ms 9.2±0.91ms +8.24%
query_block/query_block_in(10)_times(1000) 106.1±10.06ms 94.7±12.41ms -10.74%
query_block/query_block_in(10)_times(10000) 956.4±125.04ms 950.1±73.71ms -0.66%
query_block/query_block_in(1000)_times(100) 1321.5±147.15µs 1290.1±56.38µs -2.38%
query_block/query_block_in(1000)_times(1000) 13.8±1.41ms 15.4±3.67ms +11.59%
query_block/query_block_in(1000)_times(10000) 128.0±8.81ms 129.5±6.20ms +1.17%
storage_transaction 1273.6±505.27µs 1286.9±484.77µs +1.04%
vm/transaction_execution/1 419.5±26.06ms 408.9±19.77ms -2.53%
vm/transaction_execution/10 142.9±7.35ms 137.5±10.73ms -3.78%
vm/transaction_execution/20 145.3±33.90ms 126.1±8.46ms -13.21%
vm/transaction_execution/5 181.7±12.78ms 172.2±25.38ms -5.23%
vm/transaction_execution/50 167.2±14.01ms 149.9±9.51ms -10.35%


github-actions bot commented Nov 8, 2024

Benchmark for 97a16cb

Test Base PR %
accumulator_append 840.6±143.24µs 835.8±163.26µs -0.57%
block_apply/block_apply_10 378.9±15.68ms 377.8±17.58ms -0.29%
block_apply/block_apply_1000 46.8±4.70s 40.9±1.33s -12.61%
get_with_proof/db_store 48.0±8.04µs 45.3±3.89µs -5.63%
get_with_proof/mem_store 37.5±3.21µs 38.7±6.44µs +3.20%
put_and_commit/db_store/1 153.2±27.71µs 118.4±11.59µs -22.72%
put_and_commit/db_store/10 1045.3±68.93µs 1085.0±63.50µs +3.80%
put_and_commit/db_store/100 9.8±1.54ms 10.8±1.36ms +10.20%
put_and_commit/db_store/5 604.7±108.62µs 528.9±30.26µs -12.54%
put_and_commit/db_store/50 6.5±1.42ms 5.2±0.50ms -20.00%
put_and_commit/mem_store/1 75.4±10.59µs 73.2±11.19µs -2.92%
put_and_commit/mem_store/10 697.3±94.99µs 679.0±90.41µs -2.62%
put_and_commit/mem_store/100 9.2±1.72ms 7.0±1.22ms -23.91%
put_and_commit/mem_store/5 378.2±93.14µs 327.3±32.60µs -13.46%
put_and_commit/mem_store/50 3.5±0.72ms 3.3±0.51ms -5.71%
query_block/query_block_in(10)_times(100) 10.8±2.07ms 10.2±2.47ms -5.56%
query_block/query_block_in(10)_times(1000) 105.5±15.44ms 86.6±7.97ms -17.91%
query_block/query_block_in(10)_times(10000) 959.8±47.83ms 828.4±31.76ms -13.69%
query_block/query_block_in(1000)_times(100) 1199.1±49.11µs 1286.8±93.64µs +7.31%
query_block/query_block_in(1000)_times(1000) 12.3±0.57ms 14.7±2.08ms +19.51%
query_block/query_block_in(1000)_times(10000) 123.6±4.32ms 126.7±6.94ms +2.51%
storage_transaction 1106.0±386.50µs 1119.0±331.35µs +1.18%
vm/transaction_execution/1 408.5±10.26ms 413.9±15.00ms +1.32%
vm/transaction_execution/10 144.7±10.52ms 140.1±8.53ms -3.18%
vm/transaction_execution/20 119.2±6.23ms 127.2±11.57ms +6.71%
vm/transaction_execution/5 160.8±7.16ms 180.5±12.78ms +12.25%
vm/transaction_execution/50 150.3±9.40ms 153.8±14.98ms +2.33%

coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (3)
flexidag/src/ghostdag/protocol.rs (1)

Line range hint 1-1: Consider future improvements to the codebase

Several areas could benefit from future improvements:

  1. The TODO comment in blue_anticone_size method suggests a needed refactor
  2. Error handling could be more consistent across methods
  3. Complex logic (especially in check_blue_candidate) could benefit from additional documentation

Consider creating separate issues to track these improvements for future iterations.

flexidag/tests/tests.rs (2)

739-770: Clean up commented code and unused variables to improve readability

The function contains multiple commented-out lines and variables that are no longer used. This can clutter the code and make maintenance more difficult.

Consider removing unnecessary comments and variables to streamline the function:

-        let start = Instant::now();
-        // dag.commit(header.to_owned(), origin)?;
-        let duration = start.elapsed();
-        println!(
-            "commit header: {:?}, number: {:?}, duration: {:?}",
-            header.id(),
-            header.number(),
-            duration
-        );
-        // let ghostdata = dag.ghostdata(&[header.id()])?;
-        // let ghostdata = dag.ghostdata_by_hash(header.id())?.unwrap();
-        // println!(
-        //     "add a header: {:?}, selected_parent: {:?}, blue set: {:?}, red set: {:?}, blue anticone size: {:?}",
-        //     header, ghostdata.selected_parent, ghostdata.mergeset_blues, ghostdata.mergeset_reds, ghostdata.blues_anticone_sizes
-        // );
+        dag.commit(header.to_owned(), origin)?;
+        Ok(header)

1125-1155: Optimize the block creation loop for performance

The loop that creates 500 blocks sequentially may lead to long test execution times:

for i in 0..500 {
    // Block creation and DAG commit logic
}

Consider optimizing the loop by:

  • Reducing the number of iterations if possible.
  • Using asynchronous operations to parallelize block creation and commits.
  • Batching commits to minimize the overhead of individual operations (sketched below).

This can improve test performance without sacrificing coverage.
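
A minimal, self-contained sketch of the batching idea; commit_in_batches is a hypothetical helper and the commit closure stands in for dag.commit, not the real BlockDAG API:

```rust
use anyhow::Result;

/// Hypothetical helper: process items in fixed-size batches so per-batch
/// work (logging, flushing, progress checks) is amortized. batch_size must
/// be non-zero, since slice::chunks panics on zero.
fn commit_in_batches<T>(
    items: &[T],
    batch_size: usize,
    mut commit: impl FnMut(&T) -> Result<()>,
) -> Result<()> {
    for (i, chunk) in items.chunks(batch_size).enumerate() {
        for item in chunk {
            commit(item)?;
        }
        // One progress line per batch instead of one per committed block.
        println!("committed batch {} ({} items)", i, chunk.len());
    }
    Ok(())
}
```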

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between adfcb23 and e06797d.

📒 Files selected for processing (2)
  • flexidag/src/ghostdag/protocol.rs (2 hunks)
  • flexidag/tests/tests.rs (5 hunks)
🔇 Additional comments (3)
flexidag/tests/tests.rs (3)

Line range hint 786-798: Ensure consistent difficulty settings in add_and_print_with_pruning_point

In the add_and_print_with_pruning_point function, a default difficulty is set using .with_difficulty(U256::from(10)). Verify that this difficulty level is appropriate and consistent with the rest of the test scenarios.

Please confirm that a difficulty of 10 meets the requirements for all tests that utilize this function.


1171-1173: Handle potential errors during ghost data verification

When calling dag.verify_and_ghostdata, ensure that any errors are properly handled:

let verified_ghost = dag.verify_and_ghostdata(/* parameters */)?;

Add error handling to manage verification failures gracefully, which can help identify issues during testing phases.


1245-1246: Verify consistency between ghost data before and after verification

The assertion checks if the mergeset blues before and after verification are equal:

assert_eq!(ghostdata.mergeset_blues, verified_ghostdata.mergeset_blues);

Confirm that this equality is expected. If discrepancies are possible due to the verification process, adjust the assertion to accommodate acceptable differences or investigate potential issues causing inconsistencies.

Comment on lines 184 to 185
// let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?;
let selected_parent = header.parent_hash();

⚠️ Potential issue

Critical: Incorrect parent selection logic

The change bypasses the GHOSTDAG parent selection algorithm by directly using parent_hash instead of finding the parent with the highest blue work. This could lead to consensus issues as it violates the GHOSTDAG protocol specification.

Revert this change and use the original parent selection logic:

-        // let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?; 
-        let selected_parent = header.parent_hash();
+        let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?;

Comment on lines 739 to 770
fn add_and_print_with_difficulty(
    number: BlockNumber,
    parent: Hash,
    parents: Vec<Hash>,
    origin: Hash,
    difficulty: U256,
    dag: &mut BlockDAG,
) -> anyhow::Result<BlockHeader> {
    let header_builder = BlockHeaderBuilder::random();
    let header = header_builder
        .with_parent_hash(parent)
        .with_parents_hash(parents)
        .with_number(number)
        .with_difficulty(difficulty)
        .build();
    let start = Instant::now();
    // dag.commit(header.to_owned(), origin)?;
    let duration = start.elapsed();
    println!(
        "commit header: {:?}, number: {:?}, duration: {:?}",
        header.id(),
        header.number(),
        duration
    );
    // let ghostdata = dag.ghostdata(&[header.id()])?;
    // let ghostdata = dag.ghostdata_by_hash(header.id())?.unwrap();
    // println!(
    //     "add a header: {:?}, selected_parent: {:?}, blue set: {:?}, red set: {:?}, blue anticone size: {:?}",
    //     header, ghostdata.selected_parent, ghostdata.mergeset_blues, ghostdata.mergeset_reds, ghostdata.blues_anticone_sizes
    // );
    Ok(header)
}

⚠️ Potential issue

Uncomment the block commit to ensure blocks are added to the DAG

In the add_and_print_with_difficulty function, the line responsible for committing the block to the DAG is commented out:

// dag.commit(header.to_owned(), origin)?;

This means the block header is created but not actually committed, which could lead to inconsistent DAG state or missing blocks during tests.

Apply this diff to fix the issue:

-// dag.commit(header.to_owned(), origin)?;
+dag.commit(header.to_owned(), origin)?;


Benchmark for 083e3d4

Test Base PR %
accumulator_append 982.2±255.32µs 903.8±167.33µs -7.98%
block_apply/block_apply_10 411.3±23.09ms 393.6±10.75ms -4.30%
block_apply/block_apply_1000 46.6±2.79s 47.3±3.28s +1.50%
get_with_proof/db_store 50.2±7.93µs 48.0±2.31µs -4.38%
get_with_proof/mem_store 38.3±4.65µs 37.3±2.10µs -2.61%
put_and_commit/db_store/1 130.5±8.63µs 148.4±27.87µs +13.72%
put_and_commit/db_store/10 1303.2±192.85µs 1440.0±161.83µs +10.50%
put_and_commit/db_store/100 11.1±0.97ms 10.8±0.72ms -2.70%
put_and_commit/db_store/5 634.0±63.54µs 649.9±76.81µs +2.51%
put_and_commit/db_store/50 5.7±0.49ms 5.8±0.61ms +1.75%
put_and_commit/mem_store/1 76.0±8.86µs 72.7±7.81µs -4.34%
put_and_commit/mem_store/10 678.5±53.86µs 712.3±75.05µs +4.98%
put_and_commit/mem_store/100 8.3±1.60ms 8.6±1.60ms +3.61%
put_and_commit/mem_store/5 374.4±79.33µs 366.9±48.63µs -2.00%
put_and_commit/mem_store/50 3.6±0.47ms 3.5±0.39ms -2.78%
query_block/query_block_in(10)_times(100) 9.5±1.03ms 8.3±0.48ms -12.63%
query_block/query_block_in(10)_times(1000) 85.6±3.91ms 87.6±4.83ms +2.34%
query_block/query_block_in(10)_times(10000) 863.6±37.25ms 919.0±65.69ms +6.42%
query_block/query_block_in(1000)_times(100) 1462.7±149.63µs 1313.2±71.40µs -10.22%
query_block/query_block_in(1000)_times(1000) 19.3±2.61ms 14.0±0.89ms -27.46%
query_block/query_block_in(1000)_times(10000) 141.5±7.19ms 205.2±39.44ms +45.02%
storage_transaction 1023.5±337.73µs 1122.8±368.62µs +9.70%
vm/transaction_execution/1 456.8±34.85ms 472.7±32.01ms +3.48%
vm/transaction_execution/10 154.4±16.85ms 146.3±9.21ms -5.25%
vm/transaction_execution/20 144.2±15.27ms 132.3±5.17ms -8.25%
vm/transaction_execution/5 184.2±15.60ms 165.7±7.36ms -10.04%
vm/transaction_execution/50 161.7±14.40ms 182.7±41.59ms +12.99%

coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e06797d and 86ee296.

📒 Files selected for processing (1)
  • flexidag/src/ghostdag/protocol.rs (4 hunks)
🔇 Additional comments (2)
flexidag/src/ghostdag/protocol.rs (2)

393-396: LGTM: Improved logging messages

The added logging messages provide better context about why blue candidate checking failed, which will help with debugging.

Also applies to: 445-445


213-235: 🛠️ Refactor suggestion

Improve blue set verification logic and error handling

The current implementation has several issues:

  1. Silently falling back to ghostdag when blue sets don't match could mask underlying problems
  2. The warning message could be more descriptive about the blue set mismatch
  3. The second verification step adds unnecessary complexity

Consider this improved implementation:

        let remote_blue_set = blue_blocks
            .iter()
            .map(|header| header.id())
            .collect::<HashSet<_>>();
-        if new_block_data
+        let calculated_blue_set = new_block_data
             .mergeset_blues
             .iter()
             .skip(1)
             .cloned()
-            .collect::<HashSet<_>>()
-            != remote_blue_set
-        {
-            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            let ghostdata = self.ghostdag(&header.parents_hash())?;
-            if ghostdata
-                .mergeset_blues
-                .iter()
-                .skip(1)
-                .cloned()
-                .collect::<HashSet<_>>()
-                != remote_blue_set
-            {
-                bail!("The ghost data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), ghostdata.mergeset_blues);
-            }
-        }
+            .collect::<HashSet<_>>();
+        
+        if calculated_blue_set != remote_blue_set {
+            let missing_blues: Vec<_> = remote_blue_set.difference(&calculated_blue_set).collect();
+            let extra_blues: Vec<_> = calculated_blue_set.difference(&remote_blue_set).collect();
+            
+            bail!(
+                "Blue set mismatch for block {:?}:\nExpected blues: {:?}\nCalculated blues: {:?}\nMissing blues: {:?}\nExtra blues: {:?}",
+                header.id(),
+                remote_blue_set,
+                calculated_blue_set,
+                missing_blues,
+                extra_blues
+            );
+        }

Likely invalid or redundant comment.

@@ -181,7 +181,7 @@ impl<
             !parents.is_empty(),
             "genesis must be added via a call to init"
         );
-        let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?; 
+        let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?;

⚠️ Potential issue

Critical: Incorrect parent selection logic

The change bypasses the GHOSTDAG parent selection algorithm by directly using header.parents_hash() instead of finding the parent with the highest blue work. This violates the GHOSTDAG protocol specification and could lead to consensus issues.

Apply this fix:

-        let selected_parent = self.find_selected_parent(header.parents_hash().into_iter())?; 
+        let selected_parent = self.find_selected_parent(parents.iter().copied())?;

Committable suggestion skipped: line range outside the PR's diff.


Benchmark for 257dc8f

Test Base PR %
accumulator_append 825.3±138.74µs 945.7±313.93µs +14.59%
block_apply/block_apply_10 410.3±20.11ms 484.2±65.00ms +18.01%
block_apply/block_apply_1000 48.0±2.00s 48.5±4.05s +1.04%
get_with_proof/db_store 68.5±14.32µs 66.5±13.28µs -2.92%
get_with_proof/mem_store 40.2±4.77µs 40.5±3.84µs +0.75%
put_and_commit/db_store/1 169.2±37.98µs 151.8±22.43µs -10.28%
put_and_commit/db_store/10 1220.4±221.35µs 1221.2±169.45µs +0.07%
put_and_commit/db_store/100 12.2±1.42ms 12.1±1.42ms -0.82%
put_and_commit/db_store/5 782.2±127.32µs 614.8±41.42µs -21.40%
put_and_commit/db_store/50 6.6±1.08ms 7.4±1.49ms +12.12%
put_and_commit/mem_store/1 76.8±9.97µs 79.1±9.42µs +2.99%
put_and_commit/mem_store/10 706.6±81.57µs 730.5±116.45µs +3.38%
put_and_commit/mem_store/100 7.5±1.26ms 8.4±1.25ms +12.00%
put_and_commit/mem_store/5 388.7±66.87µs 383.4±55.63µs -1.36%
put_and_commit/mem_store/50 4.1±0.83ms 3.8±0.59ms -7.32%
query_block/query_block_in(10)_times(100) 9.3±1.04ms 11.3±1.61ms +21.51%
query_block/query_block_in(10)_times(1000) 103.1±14.61ms 104.4±14.13ms +1.26%
query_block/query_block_in(10)_times(10000) 1025.1±157.15ms 1036.8±135.28ms +1.14%
query_block/query_block_in(1000)_times(100) 1586.2±412.98µs 1597.3±220.75µs +0.70%
query_block/query_block_in(1000)_times(1000) 15.9±1.20ms 14.6±1.43ms -8.18%
query_block/query_block_in(1000)_times(10000) 148.1±7.09ms 159.1±31.01ms +7.43%
storage_transaction 1158.0±437.46µs 1105.0±369.14µs -4.58%
vm/transaction_execution/1 442.7±33.41ms 472.1±51.70ms +6.64%
vm/transaction_execution/10 139.6±7.48ms 139.3±8.87ms -0.21%
vm/transaction_execution/20 132.1±7.09ms 138.1±10.44ms +4.54%
vm/transaction_execution/5 173.3±14.87ms 177.9±12.29ms +2.65%
vm/transaction_execution/50 143.5±3.36ms 146.3±6.90ms +1.95%

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
flexidag/tests/tests.rs (2)

781-781: Remove commented debug logging code

The commented ghostdata logging code should either be removed or uncommented if it serves a purpose in debugging.

Apply this diff to clean up the code:

-    // let ghostdata = dag.ghostdata(&[header.id()])?;
-    // let ghostdata = dag.ghostdata_by_hash(header.id())?.unwrap();
-    // println!(
-    //     "add a header: {:?}, selected_parent: {:?}, blue set: {:?}, red set: {:?}, blue anticone size: {:?}",
-    //     header, ghostdata.selected_parent, ghostdata.mergeset_blues, ghostdata.mergeset_reds, ghostdata.blues_anticone_sizes
-    // );

Also applies to: 792-796


1107-1286: Improve code maintainability and readability

The function has several areas that could be improved:

  1. Magic numbers should be constants with descriptive names
  2. Complex block creation logic could be extracted into helper functions
  3. Repeated verification code could be refactored into a reusable function

Consider these improvements:

+const CHAIN_LENGTH: u64 = 500;
+const NORMAL_DIFFICULTY: u64 = 10;
+const HIGH_DIFFICULTY: u64 = 99999999;

+fn create_and_verify_block(
+    dag: &mut BlockDAG,
+    block: BlockHeader,
+    storage: &HashMap<Hash, BlockHeader>,
+    genesis_parent_hash: Hash,
+) -> anyhow::Result<()> {
+    let ghost = dag.ghostdata(&block.parents())?;
+    let verified_ghost = dag.verify_and_ghostdata(
+        &ghost
+            .mergeset_blues
+            .iter()
+            .skip(1)
+            .cloned()
+            .map(|x| storage.get(&x).unwrap().clone())
+            .collect::<Vec<_>>(),
+        &block,
+    )?;
+    dag.commit_trusted_block(
+        block.clone(),
+        genesis_parent_hash,
+        Arc::new(verified_ghost),
+    )
+}

This would simplify the main loop:

-    for i in 0..500 {
+    for i in 0..CHAIN_LENGTH {
         let block2 =
-            add_and_print_with_difficulty(1 + i, last.id(), vec![last.id()], U256::from(10))?;
+            add_and_print_with_difficulty(1 + i, last.id(), vec![last.id()], U256::from(NORMAL_DIFFICULTY))?;
         last = block2.clone();
         storage.insert(block2.id(), block2.clone());
-        let ghost = dag.ghostdata(&block2.parents())?;
-        let verified_ghost = dag.verify_and_ghostdata(
-            &ghost
-                .mergeset_blues
-                .iter()
-                .skip(1)
-                .cloned()
-                .map(|x| storage.get(&x).unwrap().clone())
-                .collect::<Vec<_>>(),
-            &block2,
-        )?;
-        dag.commit_trusted_block(
-            block2.clone(),
-            genesis.parent_hash(),
-            Arc::new(verified_ghost),
-        )?;
+        create_and_verify_block(&mut dag, block2, &storage, genesis.parent_hash())?;
         bottom.push(block2);
     }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 8250159 and 573bc89.

📒 Files selected for processing (3)
  • flexidag/src/blockdag.rs (2 hunks)
  • flexidag/src/ghostdag/protocol.rs (3 hunks)
  • flexidag/tests/tests.rs (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • flexidag/src/blockdag.rs
🔇 Additional comments (2)
flexidag/src/ghostdag/protocol.rs (1)

387-390: LGTM: Improved logging for failure scenarios

The added logging statements provide clear and helpful context for debugging three failure scenarios (a minimal sketch follows this comment):

  1. Blue anticone exceeding k
  2. K-cluster violation detection
  3. Mergeset blues size validation

Also applies to: 397-400, 449-452
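
As a rough illustration of the pattern these comments refer to, here is a minimal sketch of scenario 1 (blue anticone exceeding k); the function name, parameters, and message text are assumptions for illustration, not the crate's actual code:

// Minimal sketch, assuming the `log` crate: context-rich warn-level logging
// in the style described above. Names and messages are illustrative.
use log::warn;

fn blue_candidate_within_k(candidate_anticone_size: u64, k: u64) -> bool {
    if candidate_anticone_size > k {
        // Scenario 1: the candidate's blue anticone has grown past k.
        warn!(
            "blue candidate rejected: anticone size {} exceeds k = {}",
            candidate_anticone_size, k
        );
        return false;
    }
    true
}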

flexidag/tests/tests.rs (1)

22-23: LGTM!

The added imports are necessary for handling block difficulty in the new test functions.

Comment on lines +213 to +235
+        let remote_blue_set = blue_blocks
+            .iter()
+            .map(|header| header.id())
+            .collect::<HashSet<_>>();
         if new_block_data
             .mergeset_blues
             .iter()
             .skip(1)
             .cloned()
             .collect::<HashSet<_>>()
-            != blue_blocks
-                .iter()
-                .map(|header| header.id())
-                .collect::<HashSet<_>>()
+            != remote_blue_set
         {
-            if header.number() < 10000000 {
-                // no bail before 10000000
-                warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            } else {
-                bail!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            }
+            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
+            let ghostdata = self.ghostdag(&header.parents_hash())?;
+            if ghostdata
+                .mergeset_blues
+                .iter()
+                .skip(1)
+                .cloned()
+                .collect::<HashSet<_>>()
+                != remote_blue_set
+            {
+                bail!("The ghost data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), ghostdata.mergeset_blues);
+            }
         }

🛠️ Refactor suggestion

Improve blue set verification logic and error handling

The current implementation has several issues:

  1. Silently falling back to a locally recomputed ghostdag when the blue sets don't match could mask underlying problems
  2. The warning message could be more descriptive about which blocks differ between the two blue sets
  3. Deferring the bail to a second verification step delays failure instead of failing fast

Consider this improved implementation:

        let remote_blue_set = blue_blocks
            .iter()
            .map(|header| header.id())
            .collect::<HashSet<_>>();
-        if new_block_data
+        let calculated_blue_set = new_block_data
            .mergeset_blues
            .iter()
            .skip(1)
            .cloned()
-            .collect::<HashSet<_>>()
-            != remote_blue_set
-        {
-            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            let ghostdata = self.ghostdag(&header.parents_hash())?;
-            if ghostdata
-                .mergeset_blues
-                .iter()
-                .skip(1)
-                .cloned()
-                .collect::<HashSet<_>>()
-                != remote_blue_set
-            {
-                bail!("The ghost data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), ghostdata.mergeset_blues);
-            }
-        }
+            .collect::<HashSet<_>>();
+        
+        if calculated_blue_set != remote_blue_set {
+            let missing_blues: Vec<_> = remote_blue_set.difference(&calculated_blue_set).collect();
+            let extra_blues: Vec<_> = calculated_blue_set.difference(&remote_blue_set).collect();
+            
+            bail!(
+                "Blue set mismatch for block {:?}:\nExpected blues: {:?}\nCalculated blues: {:?}\nMissing blues: {:?}\nExtra blues: {:?}",
+                header.id(),
+                remote_blue_set,
+                calculated_blue_set,
+                missing_blues,
+                extra_blues
+            );
+        }

This change:

  1. Provides more detailed error messages showing the exact differences
  2. Removes the redundant second verification step
  3. Fails fast when a mismatch is detected
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        let remote_blue_set = blue_blocks
-            .iter()
-            .map(|header| header.id())
-            .collect::<HashSet<_>>();
-        if new_block_data
-            .mergeset_blues
-            .iter()
-            .skip(1)
-            .cloned()
-            .collect::<HashSet<_>>()
-            != remote_blue_set
-        {
-            warn!("The data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), new_block_data.mergeset_blues);
-            let ghostdata = self.ghostdag(&header.parents_hash())?;
-            if ghostdata
-                .mergeset_blues
-                .iter()
-                .skip(1)
-                .cloned()
-                .collect::<HashSet<_>>()
-                != remote_blue_set
-            {
-                bail!("The ghost data of blue set is not equal when executing the block: {:?}, for {:?}, checking data: {:?}", header.id(), blue_blocks.iter().map(|header| header.id()).collect::<Vec<_>>(), ghostdata.mergeset_blues);
-            }
-        }
+        let remote_blue_set = blue_blocks
+            .iter()
+            .map(|header| header.id())
+            .collect::<HashSet<_>>();
+        let calculated_blue_set = new_block_data
+            .mergeset_blues
+            .iter()
+            .skip(1)
+            .cloned()
+            .collect::<HashSet<_>>();
+        if calculated_blue_set != remote_blue_set {
+            let missing_blues: Vec<_> = remote_blue_set.difference(&calculated_blue_set).collect();
+            let extra_blues: Vec<_> = calculated_blue_set.difference(&remote_blue_set).collect();
+            bail!(
+                "Blue set mismatch for block {:?}:\nExpected blues: {:?}\nCalculated blues: {:?}\nMissing blues: {:?}\nExtra blues: {:?}",
+                header.id(),
+                remote_blue_set,
+                calculated_blue_set,
+                missing_blues,
+                extra_blues
+            );
+        }

Comment on lines +1101 to +1105
#[test]
fn test_verification_blue_block_inconsistent() -> anyhow::Result<()> {
    loop_to_blue()?;
    anyhow::Result::Ok(())
}

🛠️ Refactor suggestion

Enhance test coverage with assertions

The test function is a simple wrapper around loop_to_blue() without any assertions or verification of the test results. Consider adding specific test cases and assertions to verify the expected behavior.

Example improvement:

 #[test]
 fn test_verification_blue_block_inconsistent() -> anyhow::Result<()> {
-    loop_to_blue()?;
+    let (dag, last_block) = loop_to_blue()?;
+    
+    // Add assertions to verify the expected state
+    let ghost_data = dag.ghostdata(&[last_block.id()])?;
+    assert!(ghost_data.blue_score > 0, "Blue score should be positive");
+    assert!(!ghost_data.mergeset_blues.is_empty(), "Blue set should not be empty");
+    
     anyhow::Result::Ok(())
 }

Committable suggestion skipped: line range outside the PR's diff.


Benchmark for 2d9ad34

| Test | Base | PR | % |
| --- | --- | --- | --- |
| accumulator_append | 841.1±119.43µs | 982.7±194.43µs | +16.84% |
| block_apply/block_apply_10 | 378.1±8.62ms | 509.0±114.65ms | +34.62% |
| block_apply/block_apply_1000 | 43.2±1.88s | 48.5±2.49s | +12.27% |
| get_with_proof/db_store | 46.1±2.60µs | 63.8±13.68µs | +38.39% |
| get_with_proof/mem_store | 36.7±1.26µs | 40.2±5.37µs | +9.54% |
| put_and_commit/db_store/1 | 148.7±23.48µs | 177.0±34.32µs | +19.03% |
| put_and_commit/db_store/10 | 1629.4±424.88µs | 1326.7±187.95µs | -18.58% |
| put_and_commit/db_store/100 | 12.3±1.99ms | 13.8±1.97ms | +12.20% |
| put_and_commit/db_store/5 | 612.0±91.66µs | 615.1±67.47µs | +0.51% |
| put_and_commit/db_store/50 | 5.3±0.50ms | 6.4±1.08ms | +20.75% |
| put_and_commit/mem_store/1 | 77.9±13.15µs | 82.9±13.52µs | +6.42% |
| put_and_commit/mem_store/10 | 695.2±97.64µs | 710.2±76.67µs | +2.16% |
| put_and_commit/mem_store/100 | 6.6±0.39ms | 7.8±0.94ms | +18.18% |
| put_and_commit/mem_store/5 | 376.1±67.30µs | 390.4±77.07µs | +3.80% |
| put_and_commit/mem_store/50 | 3.6±0.47ms | 4.1±0.74ms | +13.89% |
| query_block/query_block_in(10)_times(100) | 8.7±0.62ms | 12.0±3.44ms | +37.93% |
| query_block/query_block_in(10)_times(1000) | 86.6±5.23ms | 93.0±8.42ms | +7.39% |
| query_block/query_block_in(10)_times(10000) | 850.2±33.29ms | 1067.3±197.15ms | +25.54% |
| query_block/query_block_in(1000)_times(100) | 1264.6±57.82µs | 1605.3±273.81µs | +26.94% |
| query_block/query_block_in(1000)_times(1000) | 12.3±0.27ms | 15.6±1.34ms | +26.83% |
| query_block/query_block_in(1000)_times(10000) | 145.0±22.24ms | 144.1±17.57ms | -0.62% |
| storage_transaction | 1051.6±335.79µs | 1334.5±542.30µs | +26.90% |
| vm/transaction_execution/1 | 477.1±60.72ms | 455.2±32.68ms | -4.59% |
| vm/transaction_execution/10 | 135.4±7.27ms | 156.3±17.11ms | +15.44% |
| vm/transaction_execution/20 | 133.9±10.70ms | 139.2±13.96ms | +3.96% |
| vm/transaction_execution/5 | 175.0±11.27ms | 179.3±8.31ms | +2.46% |
| vm/transaction_execution/50 | 142.3±4.73ms | 187.4±17.33ms | +31.69% |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
flexidag/tests/tests.rs (2)

823-823: Remove commented out debug logging code.

The function contains commented out debug logging code that should be removed if no longer needed.

Apply this diff to clean up:

     let header = header_builder
         .with_parent_hash(parent)
         .with_parents_hash(parents)
         .with_number(number)
         .with_pruning_point(pruning_point)
         .with_difficulty(U256::from(10))
         .build();
     let start = Instant::now();
     dag.commit(header.to_owned(), origin)?;
     let duration = start.elapsed();
     println!(
         "commit header: {:?}, number: {:?}, duration: {:?}",
         header.id(),
         header.number(),
         duration
     );
-    // let ghostdata = dag.ghostdata(&[header.id()])?;
-    // let ghostdata = dag.ghostdata_by_hash(header.id())?.unwrap();
-    // println!(
-    //     "add a header: {:?}, selected_parent: {:?}, blue set: {:?}, red set: {:?}, blue anticone size: {:?}",
-    //     header, ghostdata.selected_parent, ghostdata.mergeset_blues, ghostdata.mergeset_reds, ghostdata.blues_anticone_sizes
-    // );
     Ok(header)

Also applies to: 834-838


1143-1328: Enhance test assertions for better verification coverage.

While the test successfully verifies ghost data consistency, it could benefit from additional assertions to:

  1. Verify the blue block inconsistency conditions
  2. Check intermediate states during the block creation loop
  3. Validate the expected behavior when inconsistency is detected

Consider adding assertions like:

// Verify blue block properties
assert!(ghost.blue_score > 0, "Blue score should be positive");
assert!(!ghost.mergeset_blues.is_empty(), "Blue set should not be empty");

// Verify inconsistency detection
assert!(verified_ghost.mergeset_blues.len() < ghost.mergeset_blues.len(), 
    "Verification should reduce blue set on inconsistency");
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 573bc89 and e3dca70.

📒 Files selected for processing (2)
  • flexidag/src/blockdag.rs (2 hunks)
  • flexidag/tests/tests.rs (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • flexidag/src/blockdag.rs
🔇 Additional comments (2)
flexidag/tests/tests.rs (2)

22-23: LGTM!

The new imports are correctly placed and necessary for the added functionality.


446-456: LGTM! Test coverage improved.

The changes enhance the test coverage by:

  1. Simplifying block addition with empty iterators
  2. Adding new assertions to verify ancestor relationships for selected_parent_uncle1, uncle1, and uncle2 (see the sketch below)

The test logic remains robust and comprehensive.

Also applies to: 470-470, 475-494, 500-500, 508-519
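
A hedged sketch of what those ancestor-relationship assertions could look like, assuming a reachability helper shaped like `check_ancestor_of(ancestor, descendants) -> Result<bool>` on the DAG; the exact method name and signature are assumptions, not verified against the crate:

// Sketch only: `check_ancestor_of` and its signature are assumed here.
// The intent is to assert the reachability shape between uncles and the tip.
fn assert_uncle_relationships(
    dag: &starcoin_dag::blockdag::BlockDAG,
    uncle1: starcoin_crypto::HashValue,
    uncle2: starcoin_crypto::HashValue,
    tip: starcoin_crypto::HashValue,
) -> anyhow::Result<()> {
    // Both uncles should be reachable ancestors of the selected tip.
    anyhow::ensure!(dag.check_ancestor_of(uncle1, vec![tip])?, "uncle1 should be an ancestor of tip");
    anyhow::ensure!(dag.check_ancestor_of(uncle2, vec![tip])?, "uncle2 should be an ancestor of tip");
    // Uncles on parallel branches must not be ancestors of each other.
    anyhow::ensure!(!dag.check_ancestor_of(uncle1, vec![uncle2])?, "uncles sit on parallel branches");
    Ok(())
}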

Comment on lines +784 to +807
fn add_and_print_with_difficulty(
    number: BlockNumber,
    parent: Hash,
    parents: Vec<Hash>,
    difficulty: U256,
) -> anyhow::Result<BlockHeader> {
    let header_builder = BlockHeaderBuilder::random();
    let header = header_builder
        .with_parent_hash(parent)
        .with_parents_hash(parents)
        .with_number(number)
        .with_difficulty(difficulty)
        .build();
    let start = Instant::now();
    let duration = start.elapsed();
    println!(
        "commit header: {:?}, number: {:?}, duration: {:?}",
        header.id(),
        header.number(),
        duration
    );

    Ok(header)
}

⚠️ Potential issue

Fix timing measurement or remove unused timing code.

The function includes timing measurement code that doesn't measure any meaningful operation:

let start = Instant::now();
let duration = start.elapsed();

Since no significant operation occurs between start and elapsed(), this timing is not useful.

Apply this diff to fix:

 fn add_and_print_with_difficulty(
     number: BlockNumber,
     parent: Hash,
     parents: Vec<Hash>,
     difficulty: U256,
 ) -> anyhow::Result<BlockHeader> {
     let header_builder = BlockHeaderBuilder::random();
     let header = header_builder
         .with_parent_hash(parent)
         .with_parents_hash(parents)
         .with_number(number)
         .with_difficulty(difficulty)
         .build();
-    let start = Instant::now();
-    let duration = start.elapsed();
     println!(
         "commit header: {:?}, number: {:?}, duration: {:?}",
         header.id(),
         header.number(),
-        duration
+        Duration::from_secs(0)
     );
     Ok(header)
 }


Benchmark for 20c726a

| Test | Base | PR | % |
| --- | --- | --- | --- |
| accumulator_append | 934.0±202.40µs | 1184.1±298.97µs | +26.78% |
| block_apply/block_apply_10 | 411.7±45.62ms | 438.1±46.18ms | +6.41% |
| block_apply/block_apply_1000 | 47.1±2.29s | 49.2±3.50s | +4.46% |
| get_with_proof/db_store | 45.8±1.38µs | 48.4±4.65µs | +5.68% |
| get_with_proof/mem_store | 36.8±1.34µs | 40.4±3.34µs | +9.78% |
| put_and_commit/db_store/1 | 131.3±8.36µs | 180.9±31.99µs | +37.78% |
| put_and_commit/db_store/10 | 1319.7±161.17µs | 1354.4±269.34µs | +2.63% |
| put_and_commit/db_store/100 | 12.2±2.11ms | 12.0±1.81ms | -1.64% |
| put_and_commit/db_store/5 | 797.0±164.62µs | 632.0±65.49µs | -20.70% |
| put_and_commit/db_store/50 | 6.7±1.35ms | 6.8±1.07ms | +1.49% |
| put_and_commit/mem_store/1 | 74.2±11.75µs | 84.3±7.37µs | +13.61% |
| put_and_commit/mem_store/10 | 752.8±138.40µs | 838.1±100.18µs | +11.33% |
| put_and_commit/mem_store/100 | 7.6±1.04ms | 8.9±1.64ms | +17.11% |
| put_and_commit/mem_store/5 | 372.7±69.85µs | 343.7±43.24µs | -7.78% |
| put_and_commit/mem_store/50 | 4.3±0.88ms | 3.5±0.35ms | -18.60% |
| query_block/query_block_in(10)_times(100) | 9.2±0.95ms | 8.7±0.35ms | -5.43% |
| query_block/query_block_in(10)_times(1000) | 88.7±3.65ms | 108.6±13.19ms | +22.44% |
| query_block/query_block_in(10)_times(10000) | 1031.0±138.63ms | 959.0±104.80ms | -6.98% |
| query_block/query_block_in(1000)_times(100) | 1673.6±174.34µs | 1369.5±86.15µs | -18.17% |
| query_block/query_block_in(1000)_times(1000) | 12.9±0.84ms | 18.9±3.00ms | +46.51% |
| query_block/query_block_in(1000)_times(10000) | 140.0±12.90ms | 137.9±7.83ms | -1.50% |
| storage_transaction | 1210.3±366.37µs | 1127.3±464.67µs | -6.86% |
| vm/transaction_execution/1 | 415.8±11.60ms | 473.3±37.56ms | +13.83% |
| vm/transaction_execution/10 | 141.5±10.28ms | 143.8±12.52ms | +1.63% |
| vm/transaction_execution/20 | 137.3±9.09ms | 140.1±10.65ms | +2.04% |
| vm/transaction_execution/5 | 185.0±13.99ms | 174.7±7.29ms | -5.57% |
| vm/transaction_execution/50 | 157.2±20.04ms | 171.8±18.78ms | +9.29% |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
flexidag/tests/tests.rs (2)

834-835: Clean up commented debug prints.

Instead of leaving debug prints commented out, consider either:

  1. Removing them completely if they're no longer needed
  2. Uncommenting them if they provide valuable debugging information
  3. Moving them behind a debug flag if they're occasionally useful (see the sketch below)

This helps maintain cleaner and more maintainable code.

Also applies to: 837-838
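
If option 3 is preferred, a minimal sketch of the idea, assuming the `log` crate (the helper name and logger setup are illustrative): route the output through `log::debug!` so it stays silent unless the test run enables debug logging.

// Sketch of option 3: emit the trace at debug level instead of println!, so it
// only appears when the test initializes a logger (e.g. env_logger) and runs
// with RUST_LOG=debug. Helper name is an assumption.
use log::debug;
use starcoin_types::block::BlockHeader;

fn trace_committed_header(header: &BlockHeader) {
    debug!(
        "committed header: {:?}, number: {:?}",
        header.id(),
        header.number()
    );
}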


1149-1328: Consider refactoring for better maintainability.

The function could benefit from several improvements:

  1. Extract repeated block creation and verification logic into helper methods
  2. Encapsulate the storage HashMap in a dedicated test helper struct
  3. Break down the complex DAG construction into smaller, focused methods
  4. Add comments explaining the purpose of each section

This would make the test more maintainable and easier to understand.

Example refactoring:

struct TestDagBuilder {
    storage: HashMap<Hash, BlockHeader>,
    dag: BlockDAG,
}

impl TestDagBuilder {
    fn new(k: u64) -> Self {
        // Implementation
    }

    fn create_and_verify_block(&mut self, number: u64, parent: Hash, parents: Vec<Hash>, difficulty: U256) -> anyhow::Result<BlockHeader> {
        // Implementation
    }

    fn build_bottom_chain(&mut self, count: u64, start_block: &BlockHeader) -> anyhow::Result<Vec<BlockHeader>> {
        // Implementation
    }

    fn build_top_chain(&mut self, bottom_blocks: Vec<BlockHeader>) -> anyhow::Result<BlockHeader> {
        // Implementation
    }
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 248b7fb and e4eff6c.

📒 Files selected for processing (1)
  • flexidag/tests/tests.rs (7 hunks)
🔇 Additional comments (4)
flexidag/tests/tests.rs (4)

22-23: LGTM!

The new imports are necessary for handling consensus headers and difficulty values in the new test functions.


784-807: Remove ineffective timing measurement code.

The timing measurement is ineffective as it measures an empty duration between start and elapsed(). No significant operation occurs in this interval.

Apply this diff to fix:

-    let start = Instant::now();
-    let duration = start.elapsed();
     println!(
         "commit header: {:?}, number: {:?}, duration: {:?}",
         header.id(),
         header.number(),
-        duration
+        Duration::from_secs(0)
     );

823-823: LGTM!

Adding a default difficulty value is consistent with the new difficulty-aware testing approach.


1143-1147: Enhance test coverage with assertions and documentation.

The test function lacks explicit assertions to verify the expected behavior. Consider:

  1. Adding assertions to verify specific behaviors (see the sketch below)
  2. Documenting the test's purpose and expected outcomes
  3. Breaking down the complex DAG construction into smaller, well-documented steps
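
Mirroring the earlier suggestion, a sketch of the assertion-backed shape this test could take; it assumes `loop_to_blue` is changed to return the DAG and its final block, which the current signature does not do:

// Hypothetical: loop_to_blue would need to hand back its DAG and tip block.
#[test]
fn test_verification_blue_block_inconsistent() -> anyhow::Result<()> {
    // Build a DAG whose remote blue set disagrees with the locally computed one,
    // then confirm verification still converges to consistent ghost data.
    let (dag, tip) = loop_to_blue()?;
    let ghost = dag.ghostdata(&[tip.id()])?;
    assert!(ghost.blue_score > 0, "blue score should be positive");
    assert!(
        !ghost.mergeset_blues.is_empty(),
        "blue set should not be empty"
    );
    Ok(())
}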
