fix: block.timestamp is not accurate #3398
Conversation
```diff
@@ -89,6 +90,7 @@ impl ZkSyncStateKeeper {
             sealer,
             storage_factory,
             health_updater: ReactiveHealthCheck::new("state_keeper").1,
+            should_create_l2_block: false,
```
Not sure if we should persist this in RocksDB to prevent issues at restart?
@slowli PTAL
I think it could be worth exploring a slightly different approach: decoupling setting a new block in the `UpdatesManager` and in the `BatchExecutor`. Namely, as soon as a new block is added in the current workflow, it is still added in `UpdatesManager`, but is not sent to `BatchExecutor`. Instead, it is only sent to `BatchExecutor` after receiving the first transaction in the block, with the updated timestamp (obviously, the timestamp needs to be updated in `UpdatesManager` as well). IMO, this would make it slightly easier to reason about correctness.
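The decoupling proposed above could be sketched roughly as follows. All types and names here are illustrative stand-ins, not the real zksync-era APIs: the `UpdatesManager` records a pending block immediately after sealing, while the (stubbed) `BatchExecutor` only learns about it once the first transaction arrives, carrying a refreshed timestamp.

```rust
// Hypothetical sketch, not the actual zksync-era types.
#[derive(Clone, Copy)]
struct PendingBlock {
    number: u32,
    timestamp: u64,
}

struct UpdatesManager {
    pending: Option<PendingBlock>,
    next_number: u32,
}

struct BatchExecutor {
    started: Vec<PendingBlock>,
}

impl UpdatesManager {
    // Called right after sealing: register the next block locally only.
    fn push_pending_block(&mut self, now: u64) {
        self.pending = Some(PendingBlock { number: self.next_number, timestamp: now });
        self.next_number += 1;
    }
}

impl BatchExecutor {
    fn start_next_l2_block(&mut self, block: PendingBlock) {
        self.started.push(block);
    }
}

// On the first transaction: refresh the timestamp in the UpdatesManager,
// then forward the block to the executor.
fn on_first_transaction(um: &mut UpdatesManager, be: &mut BatchExecutor, now: u64) {
    if let Some(mut block) = um.pending.take() {
        block.timestamp = now; // timestamp reflects tx arrival, not seal time
        um.pending = Some(block); // keep the updated view locally as well
        be.start_next_l2_block(block); // executor only sees the block now
    }
}

fn main() {
    let mut um = UpdatesManager { pending: None, next_number: 42 };
    let mut be = BatchExecutor { started: vec![] };

    um.push_pending_block(1_000); // block registered at t=1000, no tx yet
    assert!(be.started.is_empty()); // executor has not started it

    on_first_transaction(&mut um, &mut be, 1_060); // first tx arrives 60s later
    assert_eq!(be.started[0].timestamp, 1_060); // fresh, accurate timestamp
    println!("executor started block #{} at t={}", be.started[0].number, be.started[0].timestamp);
}
```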
```rust
            tracing::debug!(
                "L2 block #{} (L1 batch #{}) should be sealed as per sealing rules",
                updates_manager.l2_block.number,
                updates_manager.l1_batch.number
            );
            self.seal_l2_block(updates_manager).await?;
            self.should_create_l2_block = true;
```
I don't quite understand the purpose of this variable. AFAIU, the logic here conceptually should change as follows:
- After sealing the block, do not start the next block immediately; instead, set a local flag indicating that one should be started.
- Wait for the next transaction.
- After receiving a transaction, if the flag is set, start a new block and unset the flag.

The logic here almost follows this flow, but the flag is non-local, which complicates reasoning.
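The local-flag flow described above can be condensed into a small event loop. This is a hedged sketch with invented names (`Event`, `run`), not the state keeper's real event handling: sealing only sets a local flag, and the next block is started, with the transaction's arrival time as its timestamp, when the next transaction shows up.

```rust
// Illustrative model, not the real state keeper loop.
enum Event {
    SealCurrentBlock,
    Tx(u64), // transaction with its arrival time
}

// Returns the timestamps at which new blocks were actually started.
fn run(events: Vec<Event>) -> Vec<u64> {
    let mut should_start_block = false; // local flag, not a struct field
    let mut started_timestamps = Vec::new();
    for event in events {
        match event {
            Event::SealCurrentBlock => {
                // seal_l2_block(...) would run here; do NOT start the next
                // block yet, just note that one is owed.
                should_start_block = true;
            }
            Event::Tx(now) => {
                if should_start_block {
                    // Start the new block with the tx arrival time, so its
                    // timestamp stays accurate even after a quiet period.
                    started_timestamps.push(now);
                    should_start_block = false;
                }
                // ... execute the transaction in the (now open) block ...
            }
        }
    }
    started_timestamps
}

fn main() {
    // A block is sealed; the next tx only arrives at t=500.
    let stamps = run(vec![Event::SealCurrentBlock, Event::Tx(500), Event::Tx(510)]);
    assert_eq!(stamps, vec![500]); // block opened once, stamped at tx arrival
    println!("{:?}", stamps);
}
```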
You are right, the flag could totally be local!
The reason I have it global is that I need to know the state of the last fictive block when closing the batch in the parent loop (parent function).
- Case 1: The last block has not been sealed. This is the original behavior before the PR change, because we always create a new unsealed block right after sealing one, whether we receive a new transaction or not. This is why we "always" seal the last block in the parent loop before closing the batch.
- Case 2: The last block has been sealed, but no transaction has been received for some period of time and ultimately we are "forced" to close the batch. In that case we are in a weird state where the last block has been sealed but no new block has started, and we should not seal the last block again in the parent loop.

It seems a bit hacky indeed, but it was the best way I found to avoid introducing too many changes in this PR.
Perhaps I can completely remove the sealing logic in the parent loop, so that we won't need the global flag and can turn it into a local one? That would be much easier to understand, and yes, the flow you are describing is exactly what the PR is trying to do.
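The two cases above boil down to one branch in the parent loop. A minimal sketch, assuming a hypothetical `close_batch` helper and a boolean mirroring `should_create_l2_block` (case 2 is exactly the state where that flag is still set when the batch times out):

```rust
// Illustrative only: models the parent loop's decision when closing a batch.
fn close_batch(last_block_already_sealed: bool) -> &'static str {
    if !last_block_already_sealed {
        // Case 1: pre-PR behavior, an unsealed block is always open here,
        // so it must be sealed before the batch is closed.
        "seal last block, then close batch"
    } else {
        // Case 2: the block was sealed but no tx arrived to start a new one;
        // sealing again here would be wrong.
        "close batch without sealing again"
    }
}

fn main() {
    assert_eq!(close_batch(false), "seal last block, then close batch");
    assert_eq!(close_batch(true), "close batch without sealing again");
}
```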
```rust
        if !self.should_create_l2_block {
            // l2 block has been already sealed
            self.seal_l2_block(&updates_manager).await?;
        }
```
This place is confusing with the proposed changes. The check above checks whether the latest block contains any transactions. AFAIU, if `should_create_l2_block` is true, then the check doesn't concern the latest block, but rather the previous one; the latest block isn't really started yet. So, a fictive block must be started in any case. IIUC, the current approach technically works because the previous block exists and is non-empty, but it looks hacky.
see comment below.
What ❔
Related to zkSync-Community-Hub/zksync-developers#820
Change the L2 block creation logic to start a new L2 block only when a transaction is ready to be executed.
Why ❔
The current logic starts a new L2 block as soon as the previous one is sealed. A contract that relies on `block.timestamp` would not be able to read the time correctly: if the L2 block goes stale (no transactions), it stays open indefinitely and its timestamp is no longer accurate.

The solution has been tested locally, but any feedback would be appreciated.
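The staleness problem can be illustrated with a toy model. This is an assumption-laden sketch (the function and parameter names are invented, and real block timestamps involve more machinery): it only contrasts the old behavior, where the timestamp is fixed when the block is opened right after the previous seal, with the deferred-open behavior this PR proposes.

```rust
// Toy model: which timestamp does a transaction observe via block.timestamp?
fn timestamp_seen_by_tx(block_opened_at: u64, tx_arrives_at: u64, defer_open: bool) -> u64 {
    if defer_open {
        // PR behavior: the block is only opened when the tx arrives.
        tx_arrives_at
    } else {
        // Old behavior: the block was opened immediately after the previous
        // seal, so an idle period makes its timestamp stale.
        block_opened_at
    }
}

fn main() {
    // Block slot opened at t=100; the first transaction only arrives at t=700.
    let old = timestamp_seen_by_tx(100, 700, false);
    let new = timestamp_seen_by_tx(100, 700, true);
    assert_eq!(old, 100); // pre-PR: contract sees a 600-second-old timestamp
    assert_eq!(new, 700); // with the fix: block.timestamp matches tx arrival
    println!("old={old}, new={new}");
}
```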
Checklist
Code has been formatted via `zkstack dev fmt` and `zkstack dev lint`.