feat: Prepare protocol circuits for batch rollup #7727
Conversation
end_global_variables: GlobalVariables, // Global variables for the last block in the range
out_hash: Field, // Merkle node of the L2-to-L1 messages merkle roots in the block range
fees: [FeeRecipient; 32], // Concatenation of all coinbase and fees for the block range
vk_tree_root: Field, // Root of allowed vk tree
Not mentioned in the engineering-designs doc - added the vk tree root so we can check for allowed circuits in future.
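For context, a minimal sketch of the kind of check this enables - proving a circuit's verification key hash is a member of the tree committed to by `vk_tree_root`. TypeScript purely for illustration (the real check would live in-circuit in Noir with the protocol's hash, not sha256), and all names here are assumptions:

```ts
import { createHash } from 'crypto';

// sha256 is a stand-in so the sketch is self-contained; the protocol hashes in-circuit.
function hashPair(left: Buffer, right: Buffer): Buffer {
  return createHash('sha256').update(Buffer.concat([left, right])).digest();
}

// Recompute a merkle root from a leaf, its index, and its sibling path.
function computeRoot(leaf: Buffer, index: number, siblings: Buffer[]): Buffer {
  let node = leaf;
  for (const sibling of siblings) {
    node = index % 2 === 0 ? hashPair(node, sibling) : hashPair(sibling, node);
    index = Math.floor(index / 2);
  }
  return node;
}

// A circuit is 'allowed' iff its vk hash recomputes to the vk_tree_root public input.
function isAllowedVk(vkHash: Buffer, leafIndex: number, siblings: Buffer[], vkTreeRoot: Buffer): boolean {
  return computeRoot(vkHash, leafIndex, siblings).equals(vkTreeRoot);
}
```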
BlockRootOrBlockMergePublicInputs {
    previous_archive: left.constants.last_archive, // archive before this block was added
    new_archive: archive, // archive once this block was added
    previous_block_hash: self.previous_block_hash,
    end_block_hash: block_hash, // current newest block hash = this block hash
    start_global_variables: left.constants.global_variables, // we have asserted that left.constants == right.constants => ...
    end_global_variables: left.constants.global_variables, // with a current block range of 1, we only have 1 set of constants
    out_hash: content_commitment.out_hash,
    fees: fee_arr,
    vk_tree_root: left.constants.vk_tree_root
}
This file is mostly just a renamed version of the current `root_rollup_inputs`, with these outputs as the main change.
#[test(should_fail_with="input proofs have different constants")]
fn constants_different_fails() {
    let mut inputs = default_merge_rollup_inputs();
    inputs.previous_rollup_data[0].base_or_merge_rollup_public_inputs.constants.global_variables.chain_id = 1;
    inputs.previous_rollup_data[1].base_or_merge_rollup_public_inputs.constants.global_variables.chain_id = 0;
    let _output = inputs.merge_rollup_circuit();
}
Removed this as it is a duplicate of the above test (cleaning up)
// TODO(Miranda): remove? This appears to be unused
// Returns the hash truncated to one field element
Removed this code as it's unused and has been since I added this comment (March?)
inputs.previous_rollup_data = default_previous_rollup_data();

    inputs
}
This file is just the old `root_rollup_inputs` renamed - the below `tests/root_rollup_inputs` is a new file.
public async blockRootRollupCircuit(input: BlockRootRollupInputs): Promise<BlockRootOrBlockMergePublicInputs> {
  const witnessMap = convertBlockRootRollupInputsToWitnessMap(input);

  const witness = await this.wasmSimulator.simulateCircuit(
Unsure whether the new circuits should use `this.wasmSimulator` or `this.simulationProvider`? Used `wasmSimulator` to match with `merge` for now.
Given block root rollup is similar to the old root rollup, and that one used wasm, I'd stick with that one. Eventually we should build a NAPI/FFI interface for the simulator as well as for bb...
Changes to circuit sizes

🧾 Summary (100% most significant diffs). Full diff report 👇

Benchmark results

Metrics with a significant change: (tables not recoverable from this extract)

Detailed results

All benchmarks are run on txs (details truncated); the benchmark source data is available in JSON format on S3.

- Proof generation: each column represents the number of threads used in proof generation.
- L2 block published to L1: each column represents the number of txs on an L2 block published to L1.
- L2 chain processing: each column represents the number of blocks on the L2 chain, where each block has 8 txs.
- Circuits stats: running time and I/O sizes collected for every kernel circuit run across all benchmarks, plus running time for app circuits.
- AVM simulation: time to simulate various public functions in the AVM.
- Public DB access: time to access various public DBs.
- Tree insertion stats: the duration to insert a fixed batch of leaves into each tree type.
- Miscellaneous: transaction sizes based on how many contract classes are registered in the tx, and transaction size based on fee payment method.

(All result tables omitted.)
Looks great!!
(self.start_global_variables.eq(other.start_global_variables)) &
(self.end_global_variables.eq(other.end_global_variables)) &
(self.out_hash == other.out_hash) &
(self.fees.eq(other.fees)) &
Didn't know that Noir handled array equality, nice!
l1-contracts/src/core/Rollup.sol

@@ -253,6 +304,162 @@ contract Rollup is Leonidas, IRollup {

    emit L2ProofVerified(header.globalVariables.blockNumber, _proverId);
  }

  // TODO(#7346): Commented out for now as stack too deep (unused until batch rollups integrated anyway).
Added a first impl of `verifyRootProof` for future use, but couldn't compile the contract due to a stack-too-deep error. We don't currently use it and don't have the full capabilities in ts/sol to use it anyway, so I commented it out. I can also remove it and add it back in future so we have cleaner code!
How long do you expect it to be like this? I don't really like having a big bunch of commented-out code, as it often ends up just being distracting when reading through the code, or broken when one finally tries to un-comment it because things have changed while it has been standing still 🤷
I agree - not sure on when. We will use it when batch rollups are fully integrated (the main part of the work will be editing the sequencer/orchestrator code, which is under Phil's team) after provernet is up. The logic follows from the other verify function pretty clearly, so I can remove it for now for cleanliness.
l1-contracts/src/core/Rollup.sol

// TODO(#7346): Currently previous block hash is unchecked, but will be checked in batch rollup (block merge -> root).
// block-building-helpers.ts is injecting as 0 for now, replicating here.
// previous_block_hash: the block hash just preceding this block (will eventually become the end_block_hash of the prev batch)
publicInputs[4] = bytes32(0);

// TODO(#7346): Move archive membership proof to contract?
// verifyMembership(archivePath, _previousBlockHash, header.globalVariables.blockNumber - 1, expectedLastArchive)

// end_block_hash: the current block hash (will eventually become the hash of the final block proven in a batch)
publicInputs[5] = _currentBlockHash;

// TODO(#7346): Move archive membership proof to contract?
// Currently archive root is updated by adding the new block hash inside the block-root circuit.
// verifyMembership(archivePath, _currentBlockHash, header.globalVariables.blockNumber, expectedArchive)
This is pretty messy (sorry) for a few reasons:

- As commented in a few places in the code, it would be a big change to extract and input the `previousBlockHash` for a block-root circuit (it's not used now, but required for batch rollups, and the prover doesn't 'know' yet about blocks that came before it AFAIK). For now, I've set it to 0 and it originates in `block-building-helpers`. When merge-root is used, it will check that `left.end_block_hash == right.previous_block_hash` (sketched after this thread). The very first `previous_block_hash` will be bubbled up to the final root where it will be checked on L1, but maybe the other checks (archive, block numbers, etc.) are sufficient?
- The plan (as in this doc) is to move archive membership to L1. In this PR, membership is assured as block hashes are added to the (pedersen) archive tree in the block-root circuit to find the new archive root, so for now adding a membership check would be an unnecessary and high-gas-cost change.
We are currently storing the `archive` in the `BlockLog` to support non-sequential proving, and as we needed it as input to the proof validation anyway.

If we are going the "TxObjects" direction (meeting Thursday) we won't have the archive at that point, and it will instead make sense for us to store the `BlockHash`, as we need something to sign over for the committee anyway.

With that change, you don't need the membership check here, as it would simply be reading the values and checking directly against those instead.

Short term, I think a fine approach to get the logic down would be to extend `BlockLog` with the hash; then you can simply read those or perform the check similarly to the archive checks.
Thanks for the suggestion, have added `blockHash` to `BlockLog`!
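To make the chaining described above concrete, here's a minimal sketch of the invariant the block-merge circuit will enforce (TypeScript for illustration only; the real check is the Noir `left.end_block_hash == right.previous_block_hash` assertion, and the field names here are assumptions):

```ts
// Assumed shape of the block-range public inputs relevant to chaining.
interface BlockRangePublicInputs {
  previousBlockHash: bigint; // hash of the block just before this range
  endBlockHash: bigint; // hash of the last block in this range
}

// Two adjacent ranges may only merge if the right range starts where the left ended.
// The very first previousBlockHash bubbles up to the root and is checked on L1.
function assertRangesFollowOn(left: BlockRangePublicInputs, right: BlockRangePublicInputs): void {
  if (left.endBlockHash !== right.previousBlockHash) {
    throw new Error('input block ranges do not follow on from each other');
  }
}
```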
// TODO(#7373) & TODO(#7246): Rollup.submitProof has changed to submitBlockRootProof/submitRootProof
// The inputs below may change depending on which submit fn we are using when we have a verifier.
it('verifies proof', async () => {
  const args = [
    `0x${block.header.toBuffer().toString('hex')}`,
    `0x${block.archive.root.toBuffer().toString('hex')}`,
    `0x${proverId.toBuffer().toString('hex')}`,
    `0x${block.header.hash().toBuffer().toString('hex')}`,
    `0x${serializeToBuffer(aggregationObject).toString('hex')}`,
    `0x${proof.withoutPublicInputs().toString('hex')}`,
  ] as const;

-  await expect(rollupContract.write.submitProof(args)).resolves.toBeDefined();
+  await expect(rollupContract.write.submitBlockRootProof(args)).resolves.toBeDefined();
This test is currently unused (it fails on master as it's not linked up to any Honk L1 verifier) - I changed it so `yarn build` didn't complain about incorrect inputs.
Adding a few comments just for the Solidity part. I think we can solve some of it the "easy" way here (block hashes), as we will need to deal with that as part of the validator client and tx effects -> public call work. So introducing it a little early to avoid your membership checks seems worthwhile.
Awesome work
function publishAndProcess(
  bytes calldata _header,
  bytes32 _archive,
  bytes32 _blockHash,
  SignatureLib.Signature[] memory _signatures,
  bytes calldata _body
) external override(IRollup) {
  AVAILABILITY_ORACLE.publish(_body);
-  process(_header, _archive, _signatures);
+  process(_header, _archive, _blockHash, _signatures);
}
Heads up you'll need to tweak the `SUPPORTED_SIGS` in `eth_log_handlers` if you change this. We should derive those dynamically from the abi instead of hardcoding them, so we don't forget to update them whenever we change the Rollup.sol interface.
Thanks, saved me lots of debugging time!
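On deriving the signatures dynamically: a sketch of what that could look like, in plain TypeScript. `deriveSupportedSigs` and the ABI shape are illustrative (this is not the existing `eth_log_handlers` API); it builds canonical `name(type1,type2,...)` signature strings from a standard JSON ABI, so the list can't drift from `Rollup.sol`:

```ts
// Minimal slice of the standard JSON ABI format.
type AbiParam = { type: string; components?: AbiParam[] };
type AbiEntry = { type: string; name?: string; inputs?: AbiParam[] };

// Canonical type: tuples become parenthesized component lists, keeping any array suffix.
function formatType(p: AbiParam): string {
  return p.type.startsWith('tuple')
    ? `(${(p.components ?? []).map(formatType).join(',')})${p.type.slice('tuple'.length)}`
    : p.type;
}

// e.g. 'publishAndProcess(bytes,bytes32,bytes32,(...)[],bytes)'
function deriveSupportedSigs(abi: AbiEntry[]): string[] {
  return abi
    .filter(e => e.type === 'function')
    .map(e => `${e.name}(${(e.inputs ?? []).map(formatType).join(',')})`);
}
```

Hashing each string to its 4-byte selector would then replace the hardcoded `SUPPORTED_SIGS` entries.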
// start_global_variables
publicInputs[i + 6] = globalVariablesFields[i];
// end_global_variables
publicInputs[globalVariablesFields.length + i + 6] = globalVariablesFields[i];
I'd try to get this `6` into constants.sol, or at least into a constant in `HeaderLib`.
Actually, shouldn't it be 9?
It starts at 6 because:

- PIs 0-5 are the old archive, new archive, and the previous/current block hashes
- PIs 6-14 are the 'start global variables' (filled by `publicInputs[i + 6]` above)
- PIs 15-23 are the 'end global variables' (filled by `publicInputs[globalVariablesFields.length + i + 6]` above)

Basically, we have one block rather than a range, so the pair of global variables are the same. I'm just using one loop to append them both.

I was thinking rather than hardcode indices I could just use some `offset` and increment it with each push, to avoid any errors if anything changes? (A sketch of this follows the thread.)
Ah got it, my bad!
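A sketch of the offset idea from the thread above (TypeScript for illustration; the real assembly spans `block-building-helpers.ts` and `HeaderLib`, and the segment sizes simply follow the PI layout described in this thread):

```ts
// Keep a running offset so reordering or resizing a segment can't silently
// shift the indices of later ones.
function assemblePublicInputs(
  archiveFields: bigint[], // old + new archive fields (PIs 0-3 per the thread)
  previousBlockHash: bigint, // PI 4
  endBlockHash: bigint, // PI 5
  globalVariablesFields: bigint[], // 9 fields
): bigint[] {
  const publicInputs: bigint[] = [];
  let offset = 0;
  const push = (fields: bigint[]) => {
    for (const f of fields) publicInputs[offset++] = f;
  };
  push(archiveFields);
  push([previousBlockHash, endBlockHash]);
  push(globalVariablesFields); // PIs 6-14: start_global_variables
  push(globalVariablesFields); // PIs 15-23: end_global_variables (same, for a single block)
  return publicInputs;
}
```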
// TODO(Kev): For now we can add a test that this fits inside of
// a u8.
😢
// TODO(Miranda): The below is a slow but working version for now. When we constrain either wonky or variable balanced rollups,
// construct the below in unconstrained, then constrain with hints, like squashing in reset.
Given the current goal of reducing constraints, maybe it's best to not accumulate fees for the same recipient and just concatenate for now, until we build it the unconstrained way?
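To illustrate the trade-off (TypeScript stand-in for the Noir logic; the `FeeRecipient` shape is assumed from the `[FeeRecipient; 32]` output above): concatenation is nearly free in constraints, while per-recipient accumulation needs in-circuit matching.

```ts
interface FeeRecipient {
  recipient: bigint;
  value: bigint;
}

// Cheap in constraints: append both halves into the fixed-size output, padding with empty slots.
function concatFees(left: FeeRecipient[], right: FeeRecipient[], max = 32): FeeRecipient[] {
  const out = [...left, ...right];
  while (out.length < max) out.push({ recipient: 0n, value: 0n });
  return out.slice(0, max); // the circuit output is a fixed 32-slot array
}

// Compact output but more expensive in-circuit: merge fees for matching recipients.
function accumulateFees(left: FeeRecipient[], right: FeeRecipient[]): FeeRecipient[] {
  const byRecipient = new Map<bigint, bigint>();
  for (const { recipient, value } of [...left, ...right]) {
    if (recipient === 0n && value === 0n) continue; // skip padding slots
    byRecipient.set(recipient, (byRecipient.get(recipient) ?? 0n) + value);
  }
  return [...byRecipient.entries()].map(([recipient, value]) => ({ recipient, value }));
}
```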
    }
}

// TODO(Miranda): Move into own file?
Nah
public previousBlockHash: Fr,
public endBlockHash: Fr,
// This is a u64 in nr, but GlobalVariables contains this as a u64 and is mapped to ts as a field, so I'm doing the same here
public endTimestamp: Fr,
Hmm where are we validating that the timestamp of the first block in the epoch is greater than the timestamp of the last block in the previous epoch? Feels like I may have missed that in the design.
Should we be rehydrating that previous header using the `previousBlockhash` inside the circuit to make that check?
Actually, should we be rehydrating that header to make the `assert_prev_block_rollups_follow_on_from_each_other` check to ensure both epochs "glue" together correctly?
Yep correct - we are not currently checking the timestamp of the last block in the prev epoch vs the first block in the new epoch. Imho `assert_prev_block_rollups_follow_on_from_each_other` wouldn't be the best place, as there we are checking two groups of blocks that will end up in a single epoch together. Checking the 'first' timestamp would be wasted gates most of the time.

I think we could either:

- As you suggest, recreate the previous block's header using `previousBlockhash` inside the final `root` circuit and check there, or
- Check it on L1. I don't fully understand the new `getTimestampForSlot` code in `Rollup.sol`, but using that we can gather the start timestamp of the current epoch and the end timestamp of the previous without adding extra PIs.
> As you suggest, recreate the previous block's header using previousBlockhash inside the final root circuit and check there, or

I'd go with this one in order to save L1 gas. But we can push this to another PR, so this one doesn't keep growing.
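A sketch of that first option (TypeScript illustration of logic that would live in the Noir root circuit; the header shape and `hashHeader` are assumptions): the previous epoch's last header is supplied as a witness, bound to `previousBlockHash`, and the timestamps compared.

```ts
import { createHash } from 'crypto';

// Assumed minimal header contents needed for the check.
interface Header {
  timestamp: bigint;
}

// Stand-in for the protocol's block-hash computation.
function hashHeader(header: Header): bigint {
  return BigInt('0x' + createHash('sha256').update(header.timestamp.toString()).digest('hex'));
}

function assertEpochsGlue(prevEpochLastHeader: Header, previousBlockHash: bigint, firstBlockTimestamp: bigint): void {
  // Binding the witnessed header to previousBlockHash proves it really is
  // the last block of the previous epoch.
  if (hashHeader(prevEpochLastHeader) !== previousBlockHash) {
    throw new Error('rehydrated header does not match previous_block_hash');
  }
  // The first block of this epoch must come strictly after the previous epoch ends.
  if (firstBlockTimestamp <= prevEpochLastHeader.timestamp) {
    throw new Error('first block of epoch does not follow previous epoch');
  }
}
```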
Orchestrator/prover changes look good to me 👍
@@ -79,7 +79,8 @@ describe('full_prover', () => {
    // fail the test. User asked for fixtures but we don't have any
    throw new Error('No block result found in test data');
  }

+ // TODO(#6624): Note that with honk proofs the below writes incorrect test data to file.
+ // The serialisation does not account for the prepended fields (circuit size, PI size, PI offset) in new Honk proofs, so the written data is shifted.
AAAhhh!
getPreviousRollupBlockDataFromPublicInputs(rollupOutputLeft, rollupProofLeft, verificationKeyLeft),
getPreviousRollupBlockDataFromPublicInputs(rollupOutputRight, rollupProofRight, verificationKeyRight),
Is it possible for the verification keys to be different?
LE: aaahhh.. wonky rollups?
Yep exactly! Am leaving it to be flexible in case we want wonkiness from block-root up to root. If we always want a balanced tree (e.g. always 32 block roots per root), this can become one vk.
previousMergeData[0]?.txsEffectsHash.toBuffer(),
previousMergeData[1]?.txsEffectsHash.toBuffer(),
Hm, do we need optional chaining here? The compiler should be able to infer that the merge data objects are defined based on the `if` statement above.
Good point! Think it was leftover from copying to a new fn. Will remove the `?`s.
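For reference, the narrowing being relied on here (a sketch with assumed types): once the `if` throws on an undefined element, TypeScript narrows the tuple members, so `?.` is unnecessary.

```ts
interface MergeData {
  txsEffectsHash: { toBuffer(): Buffer };
}

function combine(previousMergeData: [MergeData | undefined, MergeData | undefined]): Buffer[] {
  if (!previousMergeData[0] || !previousMergeData[1]) {
    throw new Error('both merge inputs must be defined');
  }
  // Narrowed to MergeData below - no optional chaining needed.
  return [
    previousMergeData[0].txsEffectsHash.toBuffer(),
    previousMergeData[1].txsEffectsHash.toBuffer(),
  ];
}
```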
Circuit-side looks good to me!
🤖 I have created a release *beep* *boop*

---

aztec-package: 0.50.1 (aztec-package-v0.50.0...aztec-package-v0.50.1, 2024-08-23)

### Miscellaneous
* **aztec-package:** Synchronize aztec-packages versions

barretenberg.js: 0.50.1 (barretenberg.js-v0.50.0...barretenberg.js-v0.50.1, 2024-08-23)

### Miscellaneous
* **barretenberg.js:** Synchronize aztec-packages versions

aztec-packages: 0.50.1 (aztec-packages-v0.50.0...aztec-packages-v0.50.1, 2024-08-23)

### Features
* Free instances and circuits earlier to reduce max memory usage (#8118) (32a04c1)
* Prepare protocol circuits for batch rollup (#7727) (a126e22)
* Share the commitment key between instances to reduce mem (#8154) (c3dddf8)

### Bug Fixes
* Cli-wallet manifest (#8156) (2ffcda3)

### Miscellaneous
* Replace relative paths to noir-protocol-circuits (5372ac4)
* Requiring only 1 sig from user (#8146) (f0b564b)

barretenberg: 0.50.1 (barretenberg-v0.50.0...barretenberg-v0.50.1, 2024-08-23)

### Features
* Free instances and circuits earlier to reduce max memory usage (#8118) (32a04c1)
* Share the commitment key between instances to reduce mem (#8154) (c3dddf8)

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
First run at creating new rollup circuits for batch block proving (see this PR for details).

Please note the e2e tests will fail miserably as the circuits are not yet linked up to the sequencer/prover/L1! Pushing for visibility.

EDIT: Added support for verifying block-root proofs on L1. Though we don't currently have an L1 verifier (so tests would pass whatever public inputs we had), the method now accepts the new inputs until we have batch rollups integrated.
Changes complete:

- Rename `root` to `block_root` and change outputs
- New `block_merge` circuit and associated types/structs
- New `root` circuit and associated types/structs (NB Github doesn't realise that old root -> block_root because of this new circuit, so the comparison is hard to read!)
- New structs in `circuits.js` and useful methods in `bb-prover`, `circuit-types`, and `noir-protocol-circuits-types`
- Updated `prover-client` (`orchestrator.ts` and `block-building-helpers.ts`) to use the new `block_root` public outputs
- `Rollup.sol` now verifies a `block_root` proof and stores `blockHash`
---

TODOs:

- In `block_merge` or `root`, merge fees with the same recipient - Miranda
- ~~Edit publisher and L1 to accept a `block_root` proof with new public inputs (for testing, so e2es will pass)~~ Complete!
- Integrate batch rollups to submit a real `root` proof - Miranda + Phil's team?
- ~~Make final L1 changes to verify batch proofs~~ Complete! Currently not tested with real solidity verifier, but bb verifier passes