feat: blobs. #9302
base: master
Conversation
l1-contracts/src/core/Rollup.sol
```diff
 require(
-  _flags.ignoreDA || _header.contentCommitment.txsEffectsHash == _txsEffectsHash,
+  _flags.ignoreDA || 1 == 1, // _header.blobHash == _blobHash,
   Errors.Rollup__UnavailableTxs(_header.contentCommitment.txsEffectsHash)
```
@ TMNT team: here we need to check the data has been published. I could add the EVM's blobHash to the block header (we can calculate it in ts in advance) and uncomment the check?
However, it could be doable without adding more fields to the header by having some `isAvailable` mapping filled once `_validateBlob` is called. Not sure which is preferable based on the team's priorities!
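As a reference for the first option, a minimal ts sketch of precomputing the EVM's blob hash, assuming the standard EIP-4844 versioned-hash rule (a 0x01 version byte followed by the tail of the sha256 of the KZG commitment); the function name here is illustrative, not code from this PR:

```ts
import { createHash } from 'crypto';

const VERSIONED_HASH_VERSION_KZG = 0x01;

// EIP-4844 versioned hash: sha256(commitment) with the first byte replaced by
// the version byte. This is what the BLOBHASH opcode returns on L1 for the
// blob carrying our tx effects, so it could be computed in ts ahead of time.
function blobHashFromCommitment(commitment: Uint8Array): Uint8Array {
  const digest = createHash('sha256').update(commitment).digest();
  digest[0] = VERSIONED_HASH_VERSION_KZG;
  return digest;
}
```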
```
offset += 1;

// TX FEE
tx_effects_hash_input[offset] = transaction_fee;
// TODO(Miranda): how many bytes do we expect tx fee to be? Using 29 for now
```
@ TMNT (or any) team: is there a restriction anywhere on the size of the tx fee?
Changes to circuit sizes
🧾 Summary (100% most significant diffs)
```diff
 authors = [""]
 compiler_version = ">=0.30.0"

 [dependencies]
-bigint = {tag = "v0.3.4", git = "https://github.com/noir-lang/noir-bignum" }
+bigint = {tag = "v0.4.0", git = "https://github.com/noir-lang/noir-bignum" }
```
bigint = {tag = "v0.4.0", git = "https://github.com/noir-lang/noir-bignum" } | |
bigint = {tag = "v0.4.1", git = "https://github.com/noir-lang/noir-bignum" } |
v0.4.1 should have much better brillig output.
The Blobbening
It's happening and I can only apologise.
Follows #8955.
Intro
More detailed stuff below, but the major changes are:
- Tx effects are no longer hashed into a `txs_effects_hash` but absorbed into a `SpongeBlob` and published as blobs (see the new DA flow below)

Major Issues
Things that we should resolve before merging:
- Run times massively increased:
  - The `nr` code for `blob`s is written with the BigNum lib, which uses a lot of unconstrained code then a small amount of constrained code to verify results. Unfortunately this means we cannot simulate circuits containing blobs (currently `block-root`) using wasm, or set `nr` tests to `unconstrained`, because that causes a `stack overflow` in brillig.
  - The tests therefore cannot be `unconstrained` (meaning `rollup-lib` tests take 10mins or so to run) and I've forced circuit simulation to run in native ACVM rather than wasm (adding around 1min to any tests that simulate `block-root`).
  - Any more `nr` code would only cause more runtime issues.
- Data retrieval: The below will be done in feat: Integrate beacon chain client/web2 blob getter #9101, and for now we use calldata just to keep the archiver working, as there is not yet a blob source for `data-retrieval` to use.
- Blob verification precompile gas: Batching blob KZG proofs is being thought about (see Epic: Blobs #8955 for progression): the aim is to batch them in `nr`, so we can call the precompile once per epoch rather than 3 times per block.

General TODOs
Things I'm working on:
Description
The general maths in `nr` and replicated across `foundation/blob` is described here.

Old DA Flow
From the base rollup to L1, the previous flow for publishing DA was:
Nr:
- In each `base` rollup, take in all tx effects we wish to publish and `sha256` hash them to a single value: `tx_effects_hash`
- This is passed up to the `merge` (or `block-root`) circuit
- Each `merge` or `block-root` circuit simply `sha256` hashes each 'child' `tx_effects_hash` from its left and right inputs (sketched in ts below)
- At `block-root`, we have one value: `txs_effects_hash`, which becomes part of the header's content commitment
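A minimal ts sketch of that pairwise recombination, assuming 32-byte `sha256` outputs; the exact wonky-tree rules (e.g. how a node without a sibling is handled) may differ from this illustration:

```ts
import { createHash } from 'crypto';

const sha256 = (data: Buffer) => createHash('sha256').update(data).digest();

// Combine per-tx tx_effects_hash leaves pairwise until a single
// txs_effects_hash remains, as the merge/block-root circuits do.
function rootTxsEffectsHash(leaves: Buffer[]): Buffer {
  let layer = leaves;
  while (layer.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < layer.length; i += 2) {
      // In this sketch, a node without a sibling is carried up unchanged.
      next.push(i + 1 < layer.length ? sha256(Buffer.concat([layer[i], layer[i + 1]])) : layer[i]);
    }
    layer = next;
  }
  return layer[0];
}
```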
Ts:
- The `txs_effects_hash` is checked and propagated through the orchestrator and becomes part of the ts class `L2Block` in the header
- The tx effects themselves make up the `L2Block`'s `.body`
- The `publisher` sends the serialised block `body` and `header` to the L1 block `propose` function
Sol:
- In `propose`, we decode the block `body` and `header`
- The `body` is deconstructed per tx into its tx effects and then hashed using `sha256`, until we have `N` `tx_effects_hash`es (mimicking the calculation in the `base` rollup)
- The `tx_effects_hash`es are then input as leaves to a wonky tree and hashed up to the root (mimicking the calculation from `base` to `block-root`), forming the final `txs_effects_hash`
*NB: With batch rollups, I've lost touch with what currently happens at verification and how we ensure the `txs_effects_hash` matches the one calculated in the rollup, so this might not be accurate.
New DA Flow
The new flow for publishing DA is:
Nr:
- In each `base` rollup, we treat tx effects as we treat `PartialStateReference`s - injecting a hint to the `start` and `end` state we expect from processing this `base`'s transaction
- We take the tx effects and `absorb` them into the given `start` `SpongeBlob` state. We then check the result is the same as the given `end` state
- Like `PartialStateReference`s, each `merge` or `block-root` checks that the left input's `end` blob state is equal to the right input's `start` blob state
- At `block-root`, we check the above and that the left's `start` blob state was empty. Now we have a sponge which has absorbed, as a flat array, all the tx effects in the block we wish to publish
- We squeeze this sponge into a single hash of all the effects (as we previously had in `base`)
- We generate the challenge point `z` by hashing this ^ hash with the blob commitment
- We evaluate the blob at `z` using the flat array of effects in the barycentric formula (more details on the engineering design link above), to return `y` (sketched in ts below)
- The `block-root` adds this triple (`z`, `y`, and commitment `C`) to a new array of `BlobPublicInputs`
- Like `fees`, each `block-merge` and `root` merges the left and right input arrays, so we end up with an array of each block's blob info

*NB: this will likely change to accumulating to a single set of values, rather than one per block, and is being worked on by Mike. The above also describes what happens for one blob per block for simplicity (it will actually be 3).
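To make the last few steps concrete, here is a ts sketch of deriving `z` and evaluating the blob at it with the standard barycentric formula over the BLS12-381 scalar field, p(z) = (z^N - 1)/N * sum_i d_i * w^i / (z - w^i). The hash used for `z`, the encoding of field elements, and the ordering of the evaluation points (e.g. any bit-reversal) are assumptions here, not the PR's exact construction:

```ts
import { createHash } from 'crypto';

// BLS12-381 scalar field modulus: blob data is interpreted as field elements here.
const P = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001n;

const mod = (a: bigint): bigint => ((a % P) + P) % P;

const modPow = (base: bigint, exp: bigint): bigint => {
  let result = 1n;
  let b = mod(base);
  for (let e = exp; e > 0n; e >>= 1n) {
    if (e & 1n) result = mod(result * b);
    b = mod(b * b);
  }
  return result;
};

const modInv = (a: bigint): bigint => modPow(a, P - 2n); // Fermat inverse; P is prime.

// Fiat-Shamir style challenge: hash the squeezed effects hash with the blob
// commitment and reduce into the field (the hash choice is illustrative only).
function challengeZ(effectsHash: Buffer, commitment: Buffer): bigint {
  const digest = createHash('sha256').update(Buffer.concat([effectsHash, commitment])).digest();
  return mod(BigInt('0x' + digest.toString('hex')));
}

// Barycentric evaluation of the polynomial whose evaluations at the N-th
// roots of unity omega^i are d[i] (assumes z is not itself a root of unity):
//   p(z) = (z^N - 1)/N * sum_i d[i] * omega^i / (z - omega^i)
function evaluateBlobAt(d: bigint[], omega: bigint, z: bigint): bigint {
  const N = BigInt(d.length);
  let sum = 0n;
  let omegaPow = 1n; // omega^i
  for (const di of d) {
    sum = mod(sum + di * omegaPow * modInv(mod(z - omegaPow)));
    omegaPow = mod(omegaPow * omega);
  }
  return mod((modPow(z, N) - 1n) * modInv(N) * sum);
}
```

In the circuit the equivalent evaluation is done with the BigNum lib over the flat array of effects; L1 then checks the matching KZG opening for (`z`, `y`, `C`) via the precompile, as described in the Sol section below.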
Ts:
- The `BlobPublicInputs` are checked against the ts calculated blob for each block in the orchestrator
- These form the `blobInput` (plus the expected L1 `blobHash` and a ts generated KZG proof) sent to L1 to the `propose` function (see the packing sketch below)
- The `propose` transaction is now a special 'blob transaction' where all the tx effects (the same flat array as dealt with in the rollup) are sent as a sidecar
- We still send the block `body`, so the archiver can still read the data back until feat: Integrate beacon chain client/web2 blob getter #9101*

*NB: this will change once we can read the blobs themselves from the beacon chain/some web2 client.
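For context on what L1 consumes, a ts sketch of packing one blob's values into the 192-byte input expected by the EIP-4844 point evaluation precompile (versioned hash, z, y, commitment, KZG proof, with the widths fixed by the EIP); whether `blobInput` in this PR uses exactly this layout is an assumption:

```ts
// Fixed layout of the point evaluation precompile input (EIP-4844):
//   versioned_hash (32) || z (32) || y (32) || commitment (48) || proof (48) = 192 bytes
function packPointEvaluationInput(
  blobHash: Buffer, // 32-byte versioned hash
  z: Buffer, // 32-byte evaluation point
  y: Buffer, // 32-byte claimed evaluation
  commitment: Buffer, // 48-byte KZG commitment C
  proof: Buffer, // 48-byte KZG opening proof
): Buffer {
  const widths = [32, 32, 32, 48, 48];
  const parts = [blobHash, z, y, commitment, proof];
  parts.forEach((p, i) => {
    if (p.length !== widths[i]) throw new Error(`field ${i} must be ${widths[i]} bytes`);
  });
  return Buffer.concat(parts);
}
```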
Sol:
- In `propose`, instead of recalculating the `txs_effects_hash`, we send the `blobInput` to a new `validateBlob` function.* This function:
  - Gets the `blobHash` from the EVM and checks it against the one in `blobInput`
  - Calls the point evaluation precompile to check that `z`, `y`, and `C` indeed correspond to the blob we claim
- We have now verified the `blobInput`, but still need to link this to our rollup circuit:
  - Each block's `BlobPublicInputs` is extracted from the bytes array and stored against its block number
  - When the `root` proof is verified, we reconstruct the array of `BlobPublicInputs` from the above stored values and use them in proof verification
  - If any of the `BlobPublicInputs` are incorrect (equivalently, if any of the published blobs were incorrect), the proof verification will fail
- The `blobHash` is added to `BlockLog` once it has been verified by the precompile

*NB: As above, we will eventually call the precompile just once for many blobs with one set of `BlobPublicInputs`. This will still be used in verifying the `root` proof to ensure the tx effects match those from each `base`.