feat(commitment): Commitment component #1024
Conversation
Resolved review threads:

- core/lib/dal/migrations/20240205114501_drop-l1_batches-unneeded-columns.down.sql (outdated)
- core/lib/dal/migrations/20240205114501_drop-l1_batches-unneeded-columns.up.sql (outdated)
Will try to perform more in-depth review tomorrow.
Looks good!
- core/lib/dal/migrations/20240126114931_add-l1_batches-commitment-index.up.sql (resolved)
519ad73
Looks really good! Optimally, there should be a couple of unit tests for the main component, but integration tests give some level of assurance that it works.
Created a task, will address soon.
6ac51c5
Should we move it out? To some new library?
Ideally yes, but I didn't take that into the scope of this PR.
```rust
let events_queue_commitment = events_commitment_task
    .await
    .context("`events_commitment_task` failed")??;
let bootloader_initial_content_commitment = bootloader_memory_commitment_task
    .await
    .context("`bootloader_memory_commitment_task` failed")??;
```
Does it make sense to spawn them in parallel?
Yes, the events commitment takes around 500ms on mainnet at the moment and the bootloader memory commitment around 2s, so we save about 500ms by running them in parallel.
🤖 I have created a release *beep* *boop*

## [20.7.0](core-v20.6.0...core-v20.7.0) (2024-02-16)

### Features

* Add input field to CallRequest ([#1069](#1069)) ([5087121](5087121))
* **api:** Remove unused and obsolete token info ([#1071](#1071)) ([e920897](e920897))
* Better errors for JsonRPC calls ([#1002](#1002)) ([079f999](079f999))
* **commitment:** Commitment component ([#1024](#1024)) ([60305ba](60305ba))
* **en:** Make snapshots applier resilient and process storage log chunks in parallel ([#1036](#1036)) ([805218c](805218c))
* **node_framework:** Resources and layers for ETH clients ([#1074](#1074)) ([776337a](776337a))
* **node_framework:** Support StateKeeper in the framework ([#1043](#1043)) ([a80fff2](a80fff2))

### Bug Fixes

* **api:** Return on duplicate earlier ([#1059](#1059)) ([cfa5701](cfa5701))
* **contract-verifier:** Use optimizer mode in solidity-single-file verification ([#1079](#1079)) ([fdab638](fdab638))
* Token distribution ([#1051](#1051)) ([bd63b3a](bd63b3a))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
What ❔

Drops unneeded `l1_batches` columns: `parent_hash`, `l2_l1_compressed_messages`, `compressed_write_logs`, `compressed_contracts`.

Renames `factory_deps` of `L1BatchWithMetadata` to `raw_published_factory_deps`; now it represents all published bytecodes for pre-boojum batches and published uncompressed bytecodes for post-boojum batches (compressed bytecodes are published as a part of long L2 to L1 messages).

Why ❔
Checklist

- Code has been formatted via `zk fmt` and `zk lint`.
- Spellcheck has been run via `zk spellcheck`.
- Linkcheck has been run via `zk linkcheck`.