
Conversation

@frisitano
Collaborator

No description provided.

@codspeed-hq

codspeed-hq bot commented Nov 4, 2025

CodSpeed Performance Report

Merging #409 will not alter performance

Comparing feat/l1-reorg (4327fdf) with main (233845d)

Summary

✅ 2 untouched

Collaborator Author

@frisitano frisitano left a comment

Added some comments inline

Comment on lines 52 to 53
/// The finalized block info after finalizing the consolidated batches.
finalized_block_info: Option<BlockInfo>,
Collaborator Author

@frisitano frisitano Nov 11, 2025

I question whether this is needed here. I need to consider this case in more depth, but I believe this will always be `None`.

Collaborator Author

I think in practice this will never be needed. The reason is that we always wait for the `BatchFinalized` event itself to be finalized. As such, a `BatchFinalized` event should never have an impact on the finalized head.

async fn handle_l1_finalized(
&mut self,
block_number: u64,
block_info: BlockInfo,
Collaborator Author

@frisitano frisitano Nov 11, 2025

It appears that we only ever use the finalized block number and not the hash. We should evaluate whether we can simplify this by using just the block number.
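
A minimal sketch of the simplification, assuming only the number is ever read; the surrounding struct and field names are illustrative, not the actual crate types:

```rust
// Illustrative sketch only: if the hash is never used, the handler can take just
// the finalized L1 block number instead of a full BlockInfo.
struct L1State {
    finalized_l1_block_number: u64,
}

impl L1State {
    async fn handle_l1_finalized(&mut self, block_number: u64) {
        // Only the number is needed; no hash is threaded through.
        self.finalized_l1_block_number = block_number;
    }
}
```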

batch_info: BatchInfo::new(batch_clone.index, batch_clone.hash),
l1_block_number: batch_clone.block_number,
safe_head: new_safe_head,
batch_info: BatchInfo::new(batch.index, batch.hash),
Collaborator Author

We can use `(&batch).into()` here.
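
For context, a minimal sketch of the conversion this relies on, assuming a `From<&Batch>` impl for `BatchInfo` exists or would be added; the struct definitions are illustrative:

```rust
// Illustrative only: the real Batch and BatchInfo types live in the crate.
struct Batch {
    index: u64,
    hash: [u8; 32],
}

struct BatchInfo {
    index: u64,
    hash: [u8; 32],
}

impl From<&Batch> for BatchInfo {
    fn from(batch: &Batch) -> Self {
        Self { index: batch.index, hash: batch.hash }
    }
}

// With that impl in place, the field assignment becomes:
// batch_info: (&batch).into(),
```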

}

impl core::str::FromStr for BatchStatus {
type Err = ();
Collaborator Author

Add an error type instead of `()`.
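
A minimal sketch of what this could look like, assuming string representations and variant names along these lines (the actual variants and error type are up for discussion):

```rust
use core::str::FromStr;

// Variant and error names here are illustrative.
#[derive(Debug)]
enum BatchStatus {
    Committed,
    Finalized,
    Reverted,
}

/// Error returned when a string does not map to a known batch status.
#[derive(Debug)]
struct ParseBatchStatusError(String);

impl FromStr for BatchStatus {
    type Err = ParseBatchStatusError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "committed" => Ok(Self::Committed),
            "finalized" => Ok(Self::Finalized),
            "reverted" => Ok(Self::Reverted),
            other => Err(ParseBatchStatusError(other.to_string())),
        }
    }
}
```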

Comment on lines 61 to 62
head: BlockInfo,
finalized: BlockInfo,
Collaborator Author

Do we need `BlockInfo` in this context?

// remove the L1 block infos greater than the provided l1 block number
self.remove_l1_block_info_gt(l1_block_number).await?;

// delete batch commits, l1 messages and batch finalization effects greater than the
Contributor

If we configure the tables with ON DELETE CASCADE on a foreign key relationship, we could minimize the code necessary here and let the DB take care of it once L1 blocks are deleted. While this introduces implicit behavior, it should also guarantee that the DB is in a consistent state and reduce coding errors from forgetting to clean up.
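
As a rough sketch of what that could look like with sea-orm-migration (table and column names are illustrative, not the actual schema):

```rust
use sea_orm_migration::prelude::*;

#[derive(DeriveIden)]
enum L1Block {
    Table,
    Number,
}

#[derive(DeriveIden)]
enum BatchCommit {
    Table,
    Hash,
    BlockNumber,
}

// Batch commits reference the L1 block they were observed in; deleting an L1
// block row then removes its batch commits automatically via the cascade.
fn batch_commit_table() -> TableCreateStatement {
    Table::create()
        .table(BatchCommit::Table)
        .col(ColumnDef::new(BatchCommit::Hash).binary().not_null().primary_key())
        .col(ColumnDef::new(BatchCommit::BlockNumber).big_unsigned().not_null())
        .foreign_key(
            ForeignKey::create()
                .from(BatchCommit::Table, BatchCommit::BlockNumber)
                .to(L1Block::Table, L1Block::Number)
                .on_delete(ForeignKeyAction::Cascade),
        )
        .to_owned()
}
```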

Collaborator Author

I had considered this, and there are a few considerations here:

  • We would have to maintain the L1 block info for every event, even after they are finalized. That is maybe not a huge deal (I estimate roughly 0.5 GB of additional storage on mainnet).
  • ON DELETE CASCADE works in some cases; for example, we already use it to delete safe L2 blocks when batches are deleted. However, the status field in the batch commit table is more complex to handle with cascade logic alone. We may be able to combine cascades with database triggers to achieve the desired outcome.

I tend to agree with the general sentiment that delegating this to database rules is more robust. To be pragmatic, though, I would propose that we leave this as is in this PR, open an issue, and tackle it in a follow-up PR. That said, we may need to consider this before a production release, as foreign key relationships / cascade logic are immutable.

Contributor

Yeah, let's do that in a separate PR. We don't necessarily need to do this before a production release. Worst case, for deep structural DB changes we can write a migration that copies the old data to new tables and then deletes the old ones.

Collaborator Author

agreed

@frisitano frisitano requested a review from jonastheis November 12, 2025 06:52
jonastheis previously approved these changes Nov 12, 2025
.await?;

models::batch_commit::Entity::update_many()
.filter(models::batch_commit::Column::Hash.is_in(batch_hashes.iter().cloned()))
Contributor

Shouldn't the batch status also be set to `Reverted` here?
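
If so, a hedged sketch of what that could look like with sea-orm's `update_many` builder, assuming statuses are stored as strings (the actual column representation may differ):

```rust
models::batch_commit::Entity::update_many()
    // Assumption: the status column stores a string form of BatchStatus.
    .col_expr(models::batch_commit::Column::Status, Expr::value("reverted"))
    .filter(models::batch_commit::Column::Hash.is_in(batch_hashes.iter().cloned()))
```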
