
feat: Prepare protocol circuits for batch rollup #7727

Merged

merged 31 commits into master from mw/batch-rollup on Aug 23, 2024
Conversation

Contributor

@MirandaWood MirandaWood commented Aug 1, 2024

First run at creating new rollup circuits for batch block proving (see this PR for details).

Please note the e2e tests will fail miserably as the circuits are not yet linked up to the sequencer/prover/L1! Pushing for visibility.

EDIT: Added support for verifying block-root proofs on L1. Though we don't currently have an L1 verifier (so tests would pass whatever public inputs we had), the method now accepts the new inputs until we have batch rollups integrated.


Changes complete:

  • Rename root to block_root and change outputs
  • Add block_merge circuit and associated types/structs
  • Add new root circuit and associated types/structs (NB Github doesn't realise that old root -> block_root because of this new circuit, so the comparison is hard to read!)
  • Added new types ^ to circuits.js and useful methods to bb-prover, circuit-types, and noir-protocol-circuits-types
  • Made minor changes to prover-client (orchestrator.ts and block-building-helpers.ts) to use the new block_root public outputs
  • Rollup.sol now verifies a block_root proof and stores blockHash

--
TODOs:

  • When adding fees in a block_merge or root, merge fees with the same recipient - Miranda
  • Edit publisher and L1 to accept a block_root proof with new public inputs (for testing, so e2es will pass) Complete!
  • Teach the prover/sequencer to prove many blocks and submit a root proof - Miranda + Phil's team?
  • Make final L1 changes to verify batch proofs - Complete! Currently not tested with real solidity verifier, but bb verifier passes
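The first TODO above (merging fees with the same recipient in a block_merge or root) could look roughly like the following. This is a minimal sketch, not the circuit implementation: the `FeeRecipient` shape and the use of dynamic arrays are assumptions for illustration, loosely based on the `fees: [FeeRecipient; 32]` output field quoted below.

```typescript
// Hypothetical model of the fee-accumulation TODO: concatenate the left and
// right fee arrays, summing entries that share a recipient.
interface FeeRecipient {
  recipient: string; // placeholder for the recipient address field
  value: bigint;     // placeholder for a Field-typed fee amount
}

function mergeFees(left: FeeRecipient[], right: FeeRecipient[]): FeeRecipient[] {
  const merged: FeeRecipient[] = [];
  for (const fee of [...left, ...right]) {
    if (fee.value === 0n) continue; // skip empty slots of a fixed-size array
    const existing = merged.find(f => f.recipient === fee.recipient);
    if (existing) {
      existing.value += fee.value; // same recipient: accumulate, don't duplicate
    } else {
      merged.push({ ...fee });
    }
  }
  return merged;
}
```

In the real circuit the array length is fixed at 32, so the merge would also have to re-pad the result; that detail is omitted here.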

end_global_variables: GlobalVariables, // Global variables for the last block in the range
out_hash: Field, // Merkle node of the L2-to-L1 messages merkle roots in the block range
fees: [FeeRecipient; 32], // Concatenation of all coinbase and fees for the block range
vk_tree_root: Field, // Root of allowed vk tree
Contributor Author

Not mentioned in engineering-designs doc - added the vk tree root to check for allowed circuits in future.
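For orientation, the output shape quoted above (together with the constructor excerpt a few comments below) could be mirrored in TypeScript roughly as follows. The field types are assumptions for illustration only, with `Field` modeled as `bigint` and `GlobalVariables` reduced to a few representative fields.

```typescript
type Field = bigint; // stand-in for a Noir Field element

// Hypothetical mirrors of the circuit types, for illustration only.
interface GlobalVariables { chainId: Field; version: Field; blockNumber: Field; timestamp: Field; }
interface FeeRecipient { recipient: Field; value: Field; }

interface BlockRootOrBlockMergePublicInputs {
  previousArchive: Field;                // archive root before the block range
  newArchive: Field;                     // archive root after the block range
  previousBlockHash: Field;              // hash of the block preceding the range
  endBlockHash: Field;                   // hash of the last block in the range
  startGlobalVariables: GlobalVariables; // globals of the first block in the range
  endGlobalVariables: GlobalVariables;   // globals of the last block in the range
  outHash: Field;                        // merkle node of the L2-to-L1 message roots in the range
  fees: FeeRecipient[];                  // coinbase/fee pairs for the range (32 slots in the circuit)
  vkTreeRoot: Field;                     // root of the allowed vk tree
}
```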

Comment on lines 119 to 129
BlockRootOrBlockMergePublicInputs {
previous_archive: left.constants.last_archive, // archive before this block was added
new_archive: archive, // archive once this block was added
previous_block_hash: self.previous_block_hash,
end_block_hash: block_hash, // current newest block hash = this block hash
start_global_variables: left.constants.global_variables, // we have asserted that left.constants == right.constants => ...
end_global_variables: left.constants.global_variables, // with a current block range of 1, we only have 1 set of constants
out_hash: content_commitment.out_hash,
fees: fee_arr,
vk_tree_root: left.constants.vk_tree_root
}
Contributor Author

This file is mostly just a renamed version of current root_rollup_inputs, with these outputs as the main change.

Comment on lines -84 to -91
#[test(should_fail_with="input proofs have different constants")]
fn constants_different_fails() {
let mut inputs = default_merge_rollup_inputs();
inputs.previous_rollup_data[0].base_or_merge_rollup_public_inputs.constants.global_variables.chain_id = 1;
inputs.previous_rollup_data[1].base_or_merge_rollup_public_inputs.constants.global_variables.chain_id = 0;
let _output = inputs.merge_rollup_circuit();
}

Contributor Author

Removed this as it is a duplicate of the above test (cleaning up)

Comment on lines -17 to -18
// TODO(Miranda): remove? This appears to be unused
// Returns the hash truncated to one field element
Contributor Author

@MirandaWood MirandaWood Aug 1, 2024


Removed this code as it's unused and has been since I added this comment (March?)

inputs.previous_rollup_data = default_previous_rollup_data();

inputs
}
Contributor Author

This file is just the old root_rollup_inputs renamed - the below tests/root_rollup_inputs is a new file.

public async blockRootRollupCircuit(input: BlockRootRollupInputs): Promise<BlockRootOrBlockMergePublicInputs> {
const witnessMap = convertBlockRootRollupInputsToWitnessMap(input);

const witness = await this.wasmSimulator.simulateCircuit(
Contributor Author

Unsure whether the new circuits should use this.wasmSimulator or this.simulationProvider? Used wasmSimulator to match with merge for now.

Collaborator

Given block root rollup is similar to the old root rollup, and that one used wasm, I'd stick with that one. Eventually we should build a NAPI/FFI interface for the simulator as well as for bb...

Contributor

github-actions bot commented Aug 5, 2024

Changes to circuit sizes

Generated at commit: 26561e686ce4430fc0b8ed9ef40e18435aaf4595, compared to commit: dcfd12019fbe8e443c5d162876c960a7062164af

🧾 Summary (100% most significant diffs)

Program ACIR opcodes (+/-) % Circuit size (+/-) %
rollup_root +17,444 ❌ +2344.62% -1,309,633 ✅ -32.33%

Full diff report 👇
Program ACIR opcodes (+/-) % Circuit size (+/-) %
rollup_root 18,188 (+17,444) +2344.62% 2,741,553 (-1,309,633) -32.33%

Collaborator

AztecBot commented Aug 5, 2024

Benchmark results

Metrics with a significant change:

  • avm_simulation_time_ms (Token:transfer_public): 20.4 (-37%)
Detailed results

All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.

This benchmark source data is available in JSON format on S3 here.

Proof generation

Each column represents the number of threads used in proof generation.

Metric 1 threads 4 threads 16 threads 32 threads 64 threads
proof_construction_time_sha256_ms 5,771 1,565 717 (+1%) 777 (+2%) 779 (+1%)
proof_construction_time_sha256_30_ms 11,534 3,090 1,370 (-1%) 1,439 1,460 (-1%)
proof_construction_time_sha256_100_ms 44,158 11,834 5,415 5,489 (+2%) 5,646 (-1%)
proof_construction_time_poseidon_hash_ms 79.0 34.0 34.0 56.0 (-5%) 87.0 (-1%)
proof_construction_time_poseidon_hash_30_ms 1,534 424 202 (-1%) 233 (+1%) 267
proof_construction_time_poseidon_hash_100_ms 5,662 1,518 672 (-1%) 717 (-1%) 751

L2 block published to L1

Each column represents the number of txs on an L2 block published to L1.

Metric 4 txs 8 txs 16 txs
l1_rollup_calldata_size_in_bytes 4,356 (+1%) 7,876 14,884
l1_rollup_calldata_gas 50,196 (+1%) 92,902 (+1%) 178,120
l1_rollup_execution_gas 1,396,447 (+2%) 2,130,049 (+1%) 3,915,323 (+1%)
l2_block_processing_time_in_ms 241 (-2%) 441 (+2%) 799 (-2%)
l2_block_building_time_in_ms 8,929 (-2%) 17,434 34,821 (-1%)
l2_block_rollup_simulation_time_in_ms 8,928 (-2%) 17,434 34,821 (-1%)
l2_block_public_tx_process_time_in_ms 7,531 (-2%) 15,927 33,281 (-1%)

L2 chain processing

Each column represents the number of blocks on the L2 chain where each block has 8 txs.

Metric 3 blocks 5 blocks
node_history_sync_time_in_ms 2,973 (+2%) 3,825 (+3%)
node_database_size_in_bytes 12,669,008 16,691,280
pxe_database_size_in_bytes 16,254 26,813

Circuits stats

Stats on running time and I/O sizes collected for every kernel circuit run across all benchmarks.

Circuit simulation_time_in_ms witness_generation_time_in_ms input_size_in_bytes output_size_in_bytes proving_time_in_ms
private-kernel-init 90.3 (+2%) 379 (-8%) 21,755 44,860 N/A
private-kernel-inner 167 (+1%) 687 (-2%) 72,566 45,007 N/A
private-kernel-reset-tiny 458 (-1%) 839 (-3%) 65,675 44,846 N/A
private-kernel-tail 195 (-1%) 156 (-1%) 50,686 52,257 N/A
base-parity 5.53 (-1%) N/A 160 96.0 N/A
root-parity 35.0 (-1%) N/A 73,948 96.0 N/A
base-rollup 2,739 (-1%) N/A 189,136 664 N/A
block-root-rollup 40.9 N/A 58,205 2,448 N/A
public-kernel-setup 81.9 (-4%) N/A 105,085 71,222 N/A
public-kernel-app-logic 95.0 N/A 104,911 71,222 N/A
public-kernel-tail 550 (-1%) N/A 410,534 16,414 N/A
private-kernel-reset-small 455 (-1%) N/A 66,341 45,629 N/A
private-kernel-tail-to-public 1,480 (+6%) 621 (-4%) 460,796 1,825 N/A
public-kernel-teardown 81.3 (-2%) N/A 105,349 71,222 N/A
merge-rollup 20.0 N/A 38,174 664 N/A
undefined N/A N/A N/A N/A 80,947 (+2%)

Stats on running time collected for app circuits

Function input_size_in_bytes output_size_in_bytes witness_generation_time_in_ms
ContractClassRegisterer:register 1,344 11,731 346 (+1%)
ContractInstanceDeployer:deploy 1,408 11,731 18.0 (-2%)
MultiCallEntrypoint:entrypoint 1,920 11,731 405
FeeJuice:deploy 1,376 11,731 396 (+2%)
SchnorrAccount:constructor 1,312 11,731 73.6 (-3%)
SchnorrAccount:entrypoint 2,304 11,731 414
Token:privately_mint_private_note 1,280 11,731 100 (-6%)
FPC:fee_entrypoint_public 1,344 11,731 27.6 (-6%)
Token:transfer 1,312 11,731 231 (-1%)
Benchmarking:create_note 1,344 11,731 89.0
SchnorrAccount:verify_private_authwit 1,280 11,731 27.3
Token:unshield 1,376 11,731 520
FPC:fee_entrypoint_private 1,376 11,731 705 (+2%)

AVM Simulation

Time to simulate various public functions in the AVM.

Function time_ms bytecode_size_in_bytes
FeeJuice:_increase_public_balance 56.7 (+1%) 7,739
FeeJuice:set_portal 9.79 (+1%) 2,354
Token:constructor 80.3 (+3%) 26,051
FPC:constructor 53.2 (-1%) 18,001
FeeJuice:mint_public 41.6 (+6%) 5,877
Token:mint_public 45.0 (-1%) 10,917
Token:assert_minter_and_mint 65.5 (-1%) 7,512
AuthRegistry:set_authorized 37.6 (-20%) 4,391
FPC:prepare_fee 221 (-4%) 7,043
Token:transfer_public ⚠️ 20.4 (-37%) 39,426
FPC:pay_refund 50.6 (-13%) 10,234
Benchmarking:increment_balance 930 (-1%) 6,563
Token:_increase_public_balance 40.4 8,433
FPC:pay_refund_with_shielded_rebate 65.1 (+2%) 10,783

Public DB Access

Time to access various public DBs.

Function time_ms
get-nullifier-index 0.162 (+3%)

Tree insertion stats

The duration to insert a fixed batch of leaves into each tree type.

Metric 1 leaves 16 leaves 64 leaves 128 leaves 256 leaves 512 leaves 1024 leaves
batch_insert_into_append_only_tree_16_depth_ms 2.17 (-1%) 3.87 N/A N/A N/A N/A N/A
batch_insert_into_append_only_tree_16_depth_hash_count 16.8 31.7 N/A N/A N/A N/A N/A
batch_insert_into_append_only_tree_16_depth_hash_ms 0.112 (-1%) 0.109 N/A N/A N/A N/A N/A
batch_insert_into_append_only_tree_32_depth_ms N/A N/A 11.4 (-1%) 17.8 (-1%) 30.4 (-1%) 58.8 112 (-2%)
batch_insert_into_append_only_tree_32_depth_hash_count N/A N/A 95.9 159 287 543 1,055
batch_insert_into_append_only_tree_32_depth_hash_ms N/A N/A 0.110 (-1%) 0.103 0.0987 (-1%) 0.101 0.100 (-1%)
batch_insert_into_indexed_tree_20_depth_ms N/A N/A 14.4 25.2 (-3%) 43.5 82.7 160
batch_insert_into_indexed_tree_20_depth_hash_count N/A N/A 109 207 355 691 1,363
batch_insert_into_indexed_tree_20_depth_hash_ms N/A N/A 0.109 0.102 (-3%) 0.105 0.102 0.100
batch_insert_into_indexed_tree_40_depth_ms N/A N/A 16.3 (-1%) N/A N/A N/A N/A
batch_insert_into_indexed_tree_40_depth_hash_count N/A N/A 132 N/A N/A N/A N/A
batch_insert_into_indexed_tree_40_depth_hash_ms N/A N/A 0.104 (-1%) N/A N/A N/A N/A

Miscellaneous

Transaction sizes based on how many contract classes are registered in the tx.

Metric 0 registered classes 1 registered classes
tx_size_in_bytes 64,779 668,997

Transaction size based on fee payment method

(table empty in this run)

Collaborator

@spalladino spalladino left a comment


Looks great!!

(self.start_global_variables.eq(other.start_global_variables)) &
(self.end_global_variables.eq(other.end_global_variables)) &
(self.out_hash == other.out_hash) &
(self.fees.eq(other.fees)) &
Collaborator

Didn't know that Noir handled array equality, nice!

@MirandaWood MirandaWood changed the title WIP: Batch Rollup Circuits feat: Prepare protocol circuits for batch rollup Aug 7, 2024
@@ -253,6 +304,162 @@ contract Rollup is Leonidas, IRollup {

emit L2ProofVerified(header.globalVariables.blockNumber, _proverId);
}
// TODO(#7346): Commented out for now as stack too deep (unused until batch rollups integrated anyway).
Contributor Author

Added a first impl of verifyRootProof for future use, but couldn't compile the contract due to stack too deep error. We don't currently use it and don't have the full capabilities in ts/sol to use it anyway, so I commented it out. I can also remove it and add back in future so we have cleaner code!

Contributor

How long do you expect it to be like this? I don't really like having a big bunch of commented-out code, as it often ends up just being distracting when reading through the code, or broken when one finally tries to un-comment it because things have changed while it has been standing still 🤷

Contributor Author

I agree - not sure on when. We will use it when batch rollups are fully integrated (the main part of the work will be editing the sequencer/orchestrator code, which is under Phil's team) after provernet is up. The logic follows from the other verify function pretty clearly so can remove for now for cleanliness.

Comment on lines 247 to 260
// TODO(#7346): Currently previous block hash is unchecked, but will be checked in batch rollup (block merge -> root).
// block-building-helpers.ts is injecting as 0 for now, replicating here.
// previous_block_hash: the block hash just preceding this block (will eventually become the end_block_hash of the prev batch)
publicInputs[4] = bytes32(0);

// TODO(#7346): Move archive membership proof to contract?
// verifyMembership(archivePath, _previousBlockHash, header.globalVariables.blockNumber - 1, expectedLastArchive)

// end_block_hash: the current block hash (will eventually become the hash of the final block proven in a batch)
publicInputs[5] = _currentBlockHash;

// TODO(#7346): Move archive membership proof to contract?
// Currently archive root is updated by adding the new block hash inside block-root circuit.
// verifyMembership(archivePath, _currentBlockHash, header.globalVariables.blockNumber, expectedArchive)
Contributor Author

This is pretty messy (sorry) for a few reasons:

  • As commented in a few places in the code, it would be a big change to extract and input the previousBlockHash for a block-root circuit (it's not used now, but required for batch rollups, but the prover doesn't 'know' yet about blocks that came before it AFAIK). For now, I've set it to 0 and it originates in block-building-helpers. When merge-root is used, it will check that left.end_block_hash == right.previous_block_hash. The very first previous_block_hash will be bubbled up to the final root where it will be checked on L1, but maybe the other checks (archive, block numbers, etc.) are sufficient?
  • The plan (as in this doc) is to move archive membership to L1. In this PR, membership is assured as block hashes are added to the (pedersen) archive tree in the block-root circuit to find the new archive root, so for now adding a membership check would be an unnecessary and high gas cost change.
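The merge-time consistency check described in the first bullet above — `left.end_block_hash == right.previous_block_hash` when block-merge combines two ranges — reduces to something like the following sketch. The names and the block-number check are illustrative assumptions modeled on the circuit's `assert_prev_block_rollups_follow_on_from_each_other`, not its actual implementation.

```typescript
type Field = bigint;

// Hypothetical shape of the per-range outputs that block-merge glues together.
interface BlockRangeOutputs {
  previousBlockHash: Field; // hash of the block just before this range
  endBlockHash: Field;      // hash of the last block in this range
  startBlockNumber: Field;
  endBlockNumber: Field;
}

// The right range must start exactly where the left range ended.
function assertRangesFollowOn(left: BlockRangeOutputs, right: BlockRangeOutputs): void {
  if (right.previousBlockHash !== left.endBlockHash) {
    throw new Error('input block ranges do not follow on from each other (block hash mismatch)');
  }
  if (right.startBlockNumber !== left.endBlockNumber + 1n) {
    throw new Error('input block ranges do not follow on from each other (block number gap)');
  }
}
```

As the comment notes, the very first `previous_block_hash` of a batch is not checked by this glue logic; it bubbles up to the final root for checking on L1.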

Contributor

We are currently storing the archive in the BlockLog to support non-sequential proving, and as we needed it as input to the proof validation anyway.

If we are going the "TxObjects" direction (meeting Thursday) we won't have the archive at that point, and it will instead make sense for us to store the BlockHash as we need something to sign over for the committee anyway.

With that change, you don't need the membership check here as it would simply be reading the values and checking directly against those instead.

Short term, I think a fine approach to get the logic down would be to extend BlockLog with the hash; then you can simply read those or perform the check similarly to the archive checks.

Contributor Author

Thanks for the suggestion, have added blockHash to BlockLog!

Comment on lines 183 to 195
// TODO(#7373) & TODO(#7246): Rollup.submitProof has changed to submitBlockRootProof/submitRootProof
// The inputs below may change depending on which submit fn we are using when we have a verifier.
it('verifies proof', async () => {
const args = [
`0x${block.header.toBuffer().toString('hex')}`,
`0x${block.archive.root.toBuffer().toString('hex')}`,
`0x${proverId.toBuffer().toString('hex')}`,
`0x${block.header.hash().toBuffer().toString('hex')}`,
`0x${serializeToBuffer(aggregationObject).toString('hex')}`,
`0x${proof.withoutPublicInputs().toString('hex')}`,
] as const;

await expect(rollupContract.write.submitProof(args)).resolves.toBeDefined();
await expect(rollupContract.write.submitBlockRootProof(args)).resolves.toBeDefined();
Contributor Author

This test is currently unused (it fails on master as it's not linked up to any Honk L1 verifier) - I changed it so yarn build didn't complain about incorrect inputs.

@MirandaWood MirandaWood requested a review from LHerskind August 12, 2024 09:01
Contributor

@LHerskind LHerskind left a comment


Adding a few comments just for the solidity part. I think we can solve some of it the "easy" way here (blockhashes) as we will need to deal with that as part of validator client and tx effects -> public call. So introducing a little early to avoid your membership checks seems worthwhile.

Collaborator

@spalladino spalladino left a comment

Awesome work

Comment on lines 151 to 160
function publishAndProcess(
bytes calldata _header,
bytes32 _archive,
bytes32 _blockHash,
SignatureLib.Signature[] memory _signatures,
bytes calldata _body
) external override(IRollup) {
AVAILABILITY_ORACLE.publish(_body);
process(_header, _archive, _signatures);
process(_header, _archive, _blockHash, _signatures);
}
Collaborator

Heads up you'll need to tweak the SUPPORTED_SIGS in eth_log_handlers if you change this. We should derive those dynamically from the abi instead of hardcoding them, so we don't forget to update them whenever we change the Rollup.sol interface.
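Deriving the supported signatures from the ABI, as suggested above, could look roughly like this. The helper is hypothetical; the item shape follows the standard Solidity ABI JSON, and nested tuple expansion (needed for types like the signatures array) is deliberately omitted from the sketch.

```typescript
// Minimal slice of the standard Solidity ABI JSON shape that this sketch needs.
interface AbiInput { type: string; }
interface AbiItem { type: string; name?: string; inputs?: AbiInput[]; }

// Build canonical signature strings like "process(bytes,bytes32,bytes32)" from
// an ABI, so SUPPORTED_SIGS in eth_log_handlers would track Rollup.sol
// automatically instead of being hardcoded. Tuple components are not expanded
// here, which the canonical form would require for struct parameters.
function deriveFunctionSignatures(abi: AbiItem[]): string[] {
  return abi
    .filter(item => item.type === 'function')
    .map(item => `${item.name}(${(item.inputs ?? []).map(i => i.type).join(',')})`);
}
```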

Contributor Author

Thanks, saved me lots of debugging time!

Comment on lines +267 to +270
// start_global_variables
publicInputs[i + 6] = globalVariablesFields[i];
// end_global_variables
publicInputs[globalVariablesFields.length + i + 6] = globalVariablesFields[i];
Collaborator

I'd try to get this 6 into constants.sol, or at least into a constant in HeaderLib

Collaborator

Actually, shouldn't it be 9?

Contributor Author

It starts at 6 because:

  • PIs 0-5 are the old archive, new archive, and block hashes
  • PIs 6-14 are the 'start global variables' (filled by publicInputs[i + 6] above)
  • PIs 15-23 are the 'end global variables' (filled by publicInputs[globalVariablesFields.length + i + 6] above)

Basically, we have one block rather than a range so the pair of global variables are the same. I'm just using one loop to append them both.
I was thinking rather than hardcode indices I could just use some offset and increment it with each push, to avoid any errors if anything changes?
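The running-offset idea floated above could be sketched as follows. This is an illustrative TypeScript model, not the Solidity code under review; the parameter names and grouping are assumptions based on the layout described in the comment.

```typescript
type Field = bigint;

// Pack the block-root public inputs using a running offset instead of
// hardcoded indices, so a layout change shifts everything consistently.
function packPublicInputs(
  headerFields: Field[],          // PIs 0-5: archives and block hashes
  globalVariablesFields: Field[], // 9 fields, reused for start and end globals
  tail: Field[],                  // out_hash, fees, vk_tree_root, ...
): Field[] {
  const publicInputs: Field[] = [];
  let offset = 0;
  const push = (f: Field) => { publicInputs[offset++] = f; };

  headerFields.forEach(push);          // indices 0..5
  globalVariablesFields.forEach(push); // start_global_variables: 6..14
  globalVariablesFields.forEach(push); // end_global_variables: 15..23 (one block => same globals)
  tail.forEach(push);
  return publicInputs;
}
```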

Collaborator

Ah got it, my bad!

Comment on lines +81 to +82
// TODO(Kev): For now we can add a test that this fits inside of
// a u8.
Collaborator

😢

Comment on lines 115 to 116
// TODO(Miranda): The below is a slow but working version for now. When we constrain either wonky or variable balanced rollups,
// construct the below in unconstrained, then constrain with hints, like squashing in reset.
Collaborator

Given the current goal of reducing constraints, maybe it's best to not accumulate fees for the same recipient and just concatenate for now, until we build it the unconstrained way?

}
}

// TODO(Miranda): Move into own file?
Collaborator

Nah

public previousBlockHash: Fr,
public endBlockHash: Fr,
// This is a u64 in nr, but GlobalVariables stores its timestamp as a u64 mapped to a ts Field, so I'm doing the same here
public endTimestamp: Fr,
Collaborator

Hmm where are we validating that the timestamp of the first block in the epoch is greater than the timestamp of the last block in the previous epoch? Feels like I may have missed that in the design.

Collaborator

Should we be rehydrating that previous header using the previousBlockhash inside the circuit to make that check?

Collaborator

Actually, should we be rehydrating that header to make the assert_prev_block_rollups_follow_on_from_each_other check to ensure both epochs "glue" together correctly?

Contributor Author

Yep correct - we are not currently checking the timestamp of last block in prev epoch vs first block in new epoch. Imho assert_prev_block_rollups_follow_on_from_each_other wouldn't be the best place as there we are checking two groups of blocks that will end up in a single epoch together. Checking the 'first' timestamp would be wasted gates most of the time.
I think we could either:

  • As you suggest, recreate the previous block's header using previousBlockhash inside the final root circuit and check there, or
  • Check it on L1

I don't fully understand the new getTimestampForSlot code in Rollup.sol, but using that we can gather the start timestamp of the current epoch and the end timestamp of the previous without adding extra PIs.

Collaborator

As you suggest, recreate the previous block's header using previousBlockhash inside the final root circuit and check there, or

I'd go with this one in order to save L1 gas. But we can push this for another PR, so this one doesn't keep growing.
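The check agreed on here — rehydrating the previous block's header from previousBlockhash in the final root circuit and comparing timestamps across the epoch boundary — boils down to something like this sketch. The header rehydration itself is elided; names are illustrative.

```typescript
type Field = bigint;

interface BlockHeaderLike { timestamp: Field; } // only the field this check needs

// Hypothetical epoch-boundary check for the final root circuit: the first
// block of this epoch must come strictly after the last block of the previous
// epoch, whose header would be rehydrated from previousBlockHash.
function checkEpochTimestamps(prevEpochLastHeader: BlockHeaderLike, startTimestamp: Field): void {
  if (startTimestamp <= prevEpochLastHeader.timestamp) {
    throw new Error('first block of epoch does not follow last block of previous epoch');
  }
}
```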

@MirandaWood MirandaWood marked this pull request as ready for review August 13, 2024 12:18
Contributor

@alexghr alexghr left a comment

Orchestrator/prover changes look good to me 👍

@@ -79,7 +79,8 @@ describe('full_prover', () => {
// fail the test. User asked for fixtures but we don't have any
throw new Error('No block result found in test data');
}

// TODO(#6624): Note that with honk proofs the below writes incorrect test data to file.
// The serialisation does not account for the prepended fields (circuit size, PI size, PI offset) in new Honk proofs, so the written data is shifted.
Contributor

AAAhhh!

Comment on lines +303 to +304
getPreviousRollupBlockDataFromPublicInputs(rollupOutputLeft, rollupProofLeft, verificationKeyLeft),
getPreviousRollupBlockDataFromPublicInputs(rollupOutputRight, rollupProofRight, verificationKeyRight),
Contributor

Is it possible for the verification keys to be different?
LE: aaahhh.. wonky rollups?

Contributor Author

Yep exactly! Am leaving it to be flexible in case we want wonkiness from block-root up to root. If we always want a balanced tree (e.g. always 32 block roots per root), this can become one vk.

Comment on lines 424 to 425
previousMergeData[0]?.txsEffectsHash.toBuffer(),
previousMergeData[1]?.txsEffectsHash.toBuffer(),
Contributor

Hm, do we need optional chaining here? The compiler should be able to infer that the merge data objects are defined based on the if statement above.

Contributor Author

Good point! Think it was leftover from copying to a new fn. Will remove the ?s

Collaborator

@LeilaWang LeilaWang left a comment

Circuit-side looks good to me!

@MirandaWood MirandaWood merged commit a126e22 into master Aug 23, 2024
96 checks passed
@MirandaWood MirandaWood deleted the mw/batch-rollup branch August 23, 2024 08:39
spypsy pushed a commit that referenced this pull request Aug 23, 2024
🤖 I have created a release *beep* *boop*
---


<details><summary>aztec-package: 0.50.1</summary>

## [0.50.1](aztec-package-v0.50.0...aztec-package-v0.50.1) (2024-08-23)


### Miscellaneous

* **aztec-package:** Synchronize aztec-packages versions
</details>

<details><summary>barretenberg.js: 0.50.1</summary>

## [0.50.1](barretenberg.js-v0.50.0...barretenberg.js-v0.50.1) (2024-08-23)


### Miscellaneous

* **barretenberg.js:** Synchronize aztec-packages versions
</details>

<details><summary>aztec-packages: 0.50.1</summary>

## [0.50.1](aztec-packages-v0.50.0...aztec-packages-v0.50.1) (2024-08-23)


### Features

* Free instances and circuits earlier to reduce max memory usage
([#8118](#8118))
([32a04c1](32a04c1))
* Prepare protocol circuits for batch rollup
([#7727](#7727))
([a126e22](a126e22))
* Share the commitment key between instances to reduce mem
([#8154](#8154))
([c3dddf8](c3dddf8))


### Bug Fixes

* Cli-wallet manifest
([#8156](#8156))
([2ffcda3](2ffcda3))


### Miscellaneous

* Replace relative paths to noir-protocol-circuits
([5372ac4](5372ac4))
* Requiring only 1 sig from user
([#8146](#8146))
([f0b564b](f0b564b))
</details>

<details><summary>barretenberg: 0.50.1</summary>

## [0.50.1](barretenberg-v0.50.0...barretenberg-v0.50.1) (2024-08-23)


### Features

* Free instances and circuits earlier to reduce max memory usage
([#8118](#8118))
([32a04c1](32a04c1))
* Share the commitment key between instances to reduce mem
([#8154](#8154))
([c3dddf8](c3dddf8))
</details>

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
AztecBot added a commit to AztecProtocol/barretenberg that referenced this pull request Aug 24, 2024