feat: Prepare protocol circuits for batch rollup #7727

Merged: 31 commits, Aug 23, 2024

Commits:
c49bd33
feat: first run at nr changes for batch rollup
MirandaWood Jul 30, 2024
358c7b3
feat: add ts types, structs, tests (not yet impl in orch)
MirandaWood Jul 31, 2024
9b288e3
feat: handle vks for block root and block merge circuits
MirandaWood Aug 1, 2024
c740c10
chore: small fixes, cleanup, inject prev block hash
MirandaWood Aug 1, 2024
8859550
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 2, 2024
c031a20
chore: fmt, cleanup after adding prover_id via merge
MirandaWood Aug 2, 2024
6b97c3a
chore: better comments, more root -> block_root renaming, add new cir…
MirandaWood Aug 2, 2024
56c01ac
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 5, 2024
df113ce
feat: clean fee throw, add prev_block_hash to root pub inputs
MirandaWood Aug 5, 2024
b064be0
chore: fix for tests, more root -> block root renaming, comments
MirandaWood Aug 7, 2024
951981f
feat: accumulate fees, test, add more clarity comments
MirandaWood Aug 8, 2024
d84984e
feat: verify block root proofs on L1, add prover id to PIs
MirandaWood Aug 9, 2024
381da05
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 9, 2024
930ebea
feat: add vk root to final root rollup inputs, fix typo
MirandaWood Aug 12, 2024
82aa39a
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 12, 2024
abfca2b
feat: L1 process stores block hash, remove unused code
MirandaWood Aug 12, 2024
b23e646
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 12, 2024
605e192
fix: update eth log handlers, revert fee acc, some comments
MirandaWood Aug 13, 2024
644173b
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 13, 2024
eb3e283
fix: add epoch to block root/merge methods post merge
MirandaWood Aug 13, 2024
bb4815d
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 13, 2024
1f114aa
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 13, 2024
6d37433
feat: fixes after merge, use blockroot artifact, add proving todos
MirandaWood Aug 19, 2024
117d9d3
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 19, 2024
ef2a883
fix: post merge fixes
MirandaWood Aug 19, 2024
3127243
chore: forge fmt
MirandaWood Aug 19, 2024
3e2f55f
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 19, 2024
a657e05
fix: post merge fixes to publisher
MirandaWood Aug 19, 2024
bc17e45
chore: cleanup opt chaining in orchestrator
MirandaWood Aug 22, 2024
99064a1
Merge remote-tracking branch 'origin' into mw/batch-rollup
MirandaWood Aug 22, 2024
388bce3
chore: fmt + fix post merge
MirandaWood Aug 22, 2024
235 changes: 221 additions & 14 deletions l1-contracts/src/core/Rollup.sol
@@ -166,6 +166,8 @@ contract Rollup is Leonidas, IRollup {
/**
* @notice Submit a proof for a block in the pending chain
*
* @dev TODO(#7346): Verify root proofs rather than block root when batch rollups are integrated.
*
* @dev Will call `_progressState` to update the proven chain. Notice this has potentially
* unbounded gas consumption.
*
@@ -184,13 +186,19 @@ contract Rollup is Leonidas, IRollup {
*
* @param _header - The header of the block (should match the block in the pending chain)
* @param _archive - The archive root of the block (should match the block in the pending chain)
* @param _proverId - The id of this block's prover
* _previousBlockHash - The poseidon hash of the previous block (should match the value in the previous archive tree)
* @param _currentBlockHash - The poseidon hash of this block (should match the value in the new archive tree)
* @param _aggregationObject - The aggregation object for the proof
* @param _proof - The proof to verify
*/
function submitProof(
function submitBlockRootProof(
bytes calldata _header,
bytes32 _archive,
bytes32 _proverId,
// TODO(#7246): Prev block hash unchecked for single blocks, should be checked for batch rollups. See block-building-helpers.ts for where to inject.
// bytes32 _previousBlockHash,
bytes32 _currentBlockHash,
bytes calldata _aggregationObject,
bytes calldata _proof
) external override(IRollup) {
@@ -213,23 +221,66 @@ contract Rollup is Leonidas, IRollup {
revert Errors.Rollup__InvalidProposedArchive(expectedArchive, _archive);
}

bytes32[] memory publicInputs =
new bytes32[](4 + Constants.HEADER_LENGTH + Constants.AGGREGATION_OBJECT_LENGTH);
// the archive tree root
publicInputs[0] = _archive;
// TODO(#7346): Currently verifying block root proofs until batch rollups fully integrated.
// Hence the below pub inputs are BlockRootOrBlockMergePublicInputs, which are larger than
// the planned set (RootRollupPublicInputs), for the interim.
// Public inputs are not fully verified (TODO(#7373))

bytes32[] memory publicInputs = new bytes32[](
Constants.BLOCK_ROOT_OR_BLOCK_MERGE_PUBLIC_INPUTS_LENGTH + Constants.AGGREGATION_OBJECT_LENGTH
);

// From block_root_or_block_merge_public_inputs.nr: BlockRootOrBlockMergePublicInputs.
// previous_archive.root: the previous archive tree root
publicInputs[0] = expectedLastArchive;
// previous_archive.next_available_leaf_index: the previous archive next available index
publicInputs[1] = bytes32(header.globalVariables.blockNumber);

// new_archive.root: the new archive tree root
publicInputs[2] = expectedArchive;
// this is the _next_ available leaf in the archive tree
// normally this should be equal to the block number (since leaves are 0-indexed and blocks 1-indexed)
// but in yarn-project/merkle-tree/src/new_tree.ts we prefill the tree so that block N is in leaf N
publicInputs[1] = bytes32(header.globalVariables.blockNumber + 1);

publicInputs[2] = vkTreeRoot;

bytes32[] memory headerFields = HeaderLib.toFields(header);
for (uint256 i = 0; i < headerFields.length; i++) {
publicInputs[i + 3] = headerFields[i];
// new_archive.next_available_leaf_index: the new archive next available index
publicInputs[3] = bytes32(header.globalVariables.blockNumber + 1);

// TODO(#7346): Currently previous block hash is unchecked, but will be checked in batch rollup (block merge -> root).
// block-building-helpers.ts is injecting as 0 for now, replicating here.
// previous_block_hash: the block hash just preceding this block (will eventually become the end_block_hash of the prev batch)
publicInputs[4] = bytes32(0);

// TODO(#7346): Move archive membership proof to contract?
// verifyMembership(archivePath, _previousBlockHash, header.globalVariables.blockNumber - 1, expectedLastArchive)

// end_block_hash: the current block hash (will eventually become the hash of the final block proven in a batch)
publicInputs[5] = _currentBlockHash;

// TODO(#7346): Move archive membership proof to contract?
// Currently archive root is updated by adding the new block hash inside block-root circuit.
// verifyMembership(archivePath, _currentBlockHash, header.globalVariables.blockNumber, expectedArchive)
Contributor Author:

This is pretty messy (sorry) for a few reasons:

  • As commented in a few places in the code, it would be a big change to extract and input the previousBlockHash for a block-root circuit (it's not used now but is required for batch rollups, and the prover doesn't yet 'know' about blocks that came before it, AFAIK). For now, I've set it to 0 and it originates in block-building-helpers. When merge-root is used, it will check that left.end_block_hash == right.previous_block_hash. The very first previous_block_hash will be bubbled up to the final root, where it will be checked on L1, but maybe the other checks (archive, block numbers, etc.) are sufficient?
  • The plan (as in this doc) is to move archive membership checks to L1. In this PR, membership is assured because block hashes are added to the (pedersen) archive tree in the block-root circuit to find the new archive root, so for now adding a membership check would be an unnecessary, high-gas-cost change.
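The chaining rule described above (left.end_block_hash == right.previous_block_hash, with the first previous_block_hash bubbling up to the root) can be sketched as follows. This is a hypothetical TypeScript model for illustration only; the names mirror the Noir public-input fields loosely and are not the actual circuit interfaces.

```typescript
// Illustrative shape loosely mirroring BlockRootOrBlockMergePublicInputs.
interface BlockRangeInputs {
  previousArchiveRoot: bigint;
  newArchiveRoot: bigint;
  previousBlockHash: bigint;
  endBlockHash: bigint;
}

// Sketch of the consistency checks a block-merge circuit would enforce:
// the right child must start exactly where the left child ended.
function mergeBlockRanges(left: BlockRangeInputs, right: BlockRangeInputs): BlockRangeInputs {
  if (left.endBlockHash !== right.previousBlockHash) {
    throw new Error('end_block_hash / previous_block_hash mismatch');
  }
  if (left.newArchiveRoot !== right.previousArchiveRoot) {
    throw new Error('archive roots do not chain');
  }
  // The merged range spans both children; the left child's previous_block_hash
  // bubbles up, eventually reaching the final root for the L1 check.
  return {
    previousArchiveRoot: left.previousArchiveRoot,
    newArchiveRoot: right.newArchiveRoot,
    previousBlockHash: left.previousBlockHash,
    endBlockHash: right.endBlockHash,
  };
}
```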

Contributor:

We are currently storing the archive in the BlockLog to support non-sequential proving, and as we needed it as input to the proof validation anyway.

If we are going the "TxObjects" direction (meeting Thursday) we won't have the archive at that point, and it will instead make sense for us to store the BlockHash as we need something to sign over for the committee anyway.

With that change, you don't need the membership check here as it would simply be reading the values and checking directly against those instead.

Short term, I think a fine approach to get the logic down would be to extend BlockLog with the hash; then you can simply read those or perform the check similarly to the archive checks.

Contributor Author:

Thanks for the suggestion, have added blockHash to BlockLog!
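The BlockLog suggestion above replaces a Merkle membership proof with a direct storage read. A minimal TypeScript model of that check (the names loosely mirror the contract and are illustrative only, not the actual implementation):

```typescript
// Hypothetical model of the extended BlockLog struct.
interface BlockLog {
  archive: bigint;
  blockHash: bigint;
  isProven: boolean;
}

const blocks: BlockLog[] = [];

// Instead of verifying a Merkle path against the archive tree, the contract
// can compare the submitted hash directly against the stored one.
function checkBlockHash(blockNumber: number, submittedHash: bigint): void {
  const stored = blocks[blockNumber]?.blockHash;
  if (stored === undefined || stored !== submittedHash) {
    throw new Error(`block hash mismatch for block ${blockNumber}`);
  }
}
```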


// For block root proof outputs, we have a block 'range' of just 1 block => start and end globals are the same
bytes32[] memory globalVariablesFields = HeaderLib.toFields(header.globalVariables);
for (uint256 i = 0; i < globalVariablesFields.length; i++) {
// start_global_variables
publicInputs[i + 6] = globalVariablesFields[i];
// end_global_variables
publicInputs[globalVariablesFields.length + i + 6] = globalVariablesFields[i];
Comment on lines +306 to +309
Collaborator:

I'd try to get this 6 into constants.sol, or at least into a constant in HeaderLib

Collaborator:

Actually, shouldn't it be 9?

Contributor Author:

It starts at 6 because:

  • PIs 0-5 are the old archive, new archive, and block hashes
  • PIs 6-14 are the 'start global variables' (filled by publicInputs[i + 6] above)
  • PIs 15-23 are the 'end global variables' (filled by publicInputs[globalVariablesFields.length + i + 6] above)

Basically, we have one block rather than a range so the pair of global variables are the same. I'm just using one loop to append them both.
I was thinking rather than hardcode indices I could just use some offset and increment it with each push, to avoid any errors if anything changes?

Collaborator:

Ah got it, my bad!
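The offset-counter idea suggested in the thread above could look roughly like this. A TypeScript sketch with placeholder parameters, not the actual contract code; the indices match the layout explained in the comments (archives 0-3, hashes 4-5, globals 6-23):

```typescript
// Sketch: pack public inputs via an incrementing offset instead of hardcoded
// indices, so the layout survives changes. Field contents are placeholders.
function packBlockRootPublicInputs(
  archiveFields: bigint[],        // [prevRoot, prevNextIndex, newRoot, newNextIndex]
  previousBlockHash: bigint,
  endBlockHash: bigint,
  globalVariablesFields: bigint[], // 9 fields for one block's global variables
): bigint[] {
  const out: bigint[] = [];
  let offset = 0;
  const push = (v: bigint) => { out[offset++] = v; };

  archiveFields.forEach(push);          // indices 0-3
  push(previousBlockHash);              // index 4
  push(endBlockHash);                   // index 5
  globalVariablesFields.forEach(push);  // start_global_variables: 6-14
  globalVariablesFields.forEach(push);  // end_global_variables: 15-23 (same, range of 1 block)
  return out;
}
```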

}
// out_hash: root of this block's l2 to l1 message tree (will eventually be root of roots)
publicInputs[24] = header.contentCommitment.outHash;

// For block root proof outputs, we have a single recipient-value fee payment pair,
// but the struct contains space for the max (32) => we keep 31*2=62 fields blank to represent it.
// fees: array of recipient-value pairs, for a single block just one entry (will eventually be filled and paid out here)
publicInputs[25] = bytes32(uint256(uint160(header.globalVariables.coinbase)));
publicInputs[26] = bytes32(header.totalFees);
// publicInputs[27] -> publicInputs[88] left blank for empty fee array entries

publicInputs[headerFields.length + 3] = _proverId;
// vk_tree_root
publicInputs[89] = vkTreeRoot;
// prover_id: id of current block range's prover
publicInputs[90] = _proverId;

// the block proof is recursive, which means it comes with an aggregation object
// this snippet copies it into the public inputs needed for verification
@@ -240,7 +291,7 @@ contract Rollup is Leonidas, IRollup {
assembly {
part := calldataload(add(_aggregationObject.offset, mul(i, 32)))
}
publicInputs[i + 4 + Constants.HEADER_LENGTH] = part;
publicInputs[i + 91] = part;
}

if (!verifier.verify(_proof, publicInputs)) {
@@ -253,6 +304,162 @@ contract Rollup is Leonidas, IRollup {

emit L2ProofVerified(header.globalVariables.blockNumber, _proverId);
}
// TODO(#7346): Commented out for now as stack too deep (unused until batch rollups integrated anyway).
Contributor Author:

Added a first impl of verifyRootProof for future use, but couldn't compile the contract due to a stack-too-deep error. We don't currently use it, and don't have the full capabilities in ts/sol to use it anyway, so I commented it out. I can also remove it and add it back in future so we have cleaner code!

Contributor:

How long do you expect it to be like this? I don't really like having a big bunch of commented-out code, as it often ends up just being distracting when reading through the code, or broken when one finally tries to un-comment it because things have changed while it has been standing still 🤷

Contributor Author:

I agree - not sure when. We will use it when batch rollups are fully integrated (the main part of the work will be editing the sequencer/orchestrator code, which is under Phil's team) after provernet is up. The logic follows from the other verify function pretty clearly, so I can remove it for now for cleanliness.

// /**
// * @notice Submit a proof for a range of blocks in the pending chain
// *
// * @dev TODO(#7346): Currently unused - integrate when batch rollups are integrated.
// *
// * @dev Will call `_progressState` to update the proven chain. Notice this has potentially
// * unbounded gas consumption.
// *
// * @dev Will emit `L2ProofVerified` if the proof is valid
// *
// * @dev Will throw if:
// * - The block number is past the pending chain
// * - The previous archive root does not match the archive root of the previous range's last block
// * - The new archive root does not match the archive root of the proposed range's last block
// * - The proof is invalid
// *
// * @dev We provide the `_archive` and `_previousArchive` even if they could be read from storage itself because it allows for
// * better error messages. Without passing it, we would just have a proof verification failure.
// *
// * @dev Following the `BlockLog` struct assumption
// *
// * @param _previousArchive - The archive root of the last block in the previous proven range
// * @param _archive - The archive root of the last block in the range
// * @param _previousBlockHash - The poseidon hash of the last block in the previous proven range (should match the value in the previous archive tree)
// * @param _currentBlockHash - The poseidon hash of the last block in this range (should match the value in the new archive tree)
// * @param outHash - The root of roots of the blocks' l2 to l1 message tree
// * @param coinbases - The recipients of the fees for each block in the range (max 32)
// * @param fees - The fees to be paid for each block in the range (max 32)
// * @param _proverId - The id of this block's prover
// * @param _aggregationObject - The aggregation object for the proof
// * @param _proof - The proof to verify
// */
// function submitRootProof(
// bytes32 _previousArchive,
// bytes32 _archive,
// bytes32 _previousBlockHash,
// bytes32 _currentBlockHash,
// bytes32 outHash,
// address[32] calldata coinbases,
// uint256[32] calldata fees,
// bytes32 _proverId,
// bytes calldata _aggregationObject,
// bytes calldata _proof
// ) external override(IRollup) {
// // TODO(#7346): The below assumes that the range of blocks being proven is always the 'next' range,
// // does not allow for any 'gaps'. Maybe we should allow gaps to avoid someone holding up the chain.
// uint256 startBlockNumber = provenBlockCount + 1;
// uint256 endBlockNumber = pendingBlockCount;

// // TODO: For now, while this fn is unused, checking input prev and current archives against expected.
// // It may be better to input block numbers and gather archives from there.
// bytes32 expectedLastArchive = blocks[startBlockNumber - 1].archive;
// bytes32 expectedArchive = blocks[endBlockNumber].archive;

// // We do it this way to provide better error messages than passing along the storage values
// // TODO(#4148) Proper genesis state. If the state is empty, we allow anything for now.
// if (expectedLastArchive != bytes32(0) && _previousArchive != expectedLastArchive) {
// revert Errors.Rollup__InvalidArchive(expectedLastArchive, _previousArchive);
// }

// // TODO: Below assumes the end state after proving this range of blocks cannot be 0, correct?
// if (expectedArchive == bytes32(0)) {
// revert Errors.Rollup__TryingToProveNonExistingBlock();
// }

// if (_archive != expectedArchive) {
// revert Errors.Rollup__InvalidProposedArchive(expectedArchive, _archive);
// }

// // TODO(#7346): Add a constant with calculated len of RootRollupPublicInputs:
// // Currently 64 for fees (32 * 2) + 4 for archives (2 * 2) + 6 for indiv. fields
// // Public inputs are not fully verified (TODO(#7373))

// bytes32[] memory publicInputs =
// new bytes32[](74 + Constants.AGGREGATION_OBJECT_LENGTH);

// // From root_rollup_public_inputs.nr RootRollupPublicInputs.
// // previous_archive.root: the previous archive tree root
// publicInputs[0] = expectedLastArchive;
// // previous_archive.next_available_leaf_index: the previous archive next available index
// publicInputs[1] = bytes32(startBlockNumber);

// // end_archive.root: the new archive tree root
// publicInputs[2] = expectedArchive;
// // this is the _next_ available leaf in the archive tree
// // normally this should be equal to the block number (since leaves are 0-indexed and blocks 1-indexed)
// // but in yarn-project/merkle-tree/src/new_tree.ts we prefill the tree so that block N is in leaf N
// // end_archive.next_available_leaf_index: the new archive next available index
// publicInputs[3] = bytes32(endBlockNumber + 1);

// // previous_block_hash: the block hash of block number startBlockNumber - 1
// publicInputs[4] = _previousBlockHash;

// // verifyMembership(archivePath, _previousBlockHash, startBlockNumber - 1, expectedLastArchive)

// // end_timestamp: TODO: is this the correct timestamp for public inputs?
// publicInputs[5] = bytes32(lastBlockTs);

// // end_block_hash: the block hash of block number endBlockNumber
// publicInputs[6] = _currentBlockHash;

// // verifyMembership(archivePath, _currentBlockHash, endBlockNumber, expectedArchive)

// // out_hash: the root of roots of each block's l2 to l1 message tree
// publicInputs[7] = outHash;

// // TODO(Miranda):
// // Current outbox takes a single block's set of l2 to l1 messages where the outHash represents the root
// // of a wonky tree, where each leaf is itself a small tree of each tx's l2 to l1 messages.
// // For #7346 we need this outHash to represent multiple blocks' outHashes.
// // OUTBOX.insert(
// // endBlockNumber, outHash, l2ToL1TreeMinHeight
// // );

// // fees: array of recipient-value pairs
// for (uint256 i = 0; i < 32; i++) {
// publicInputs[2*i + 8] = bytes32(uint256(uint160(coinbases[i])));
// publicInputs[2*i + 9] = bytes32(fees[i]);
// // TODO(#7346): Move payout of fees here from process()
// // if (coinbases[i] != address(0) && fees[i] > 0) {
// // GAS_TOKEN.transfer(coinbases[i], fees[i]);
// // }
// }

// // prover_id: id of current block range's prover
// publicInputs[73] = _proverId;

// for (uint256 i = 0; i < 74; i++) {
// console.logBytes32(publicInputs[i]);
// }

// // the block proof is recursive, which means it comes with an aggregation object
// // this snippet copies it into the public inputs needed for verification
// // it also guards against empty _aggregationObject used with mocked proofs
// uint256 aggregationLength = _aggregationObject.length / 32;
// for (uint256 i = 0; i < Constants.AGGREGATION_OBJECT_LENGTH && i < aggregationLength; i++) {
// bytes32 part;
// assembly {
// part := calldataload(add(_aggregationObject.offset, mul(i, 32)))
// }
// publicInputs[i + 74] = part;
// }

// if (!verifier.verify(_proof, publicInputs)) {
// revert Errors.Rollup__InvalidProof();
// }

// for (uint256 i = startBlockNumber; i < endBlockNumber; i++) {
// blocks[i].isProven = true;
// }

// _progressState();

// emit L2ProofVerified(endBlockNumber, _proverId);
// }

/**
* @notice Progresses the state of the proven chain as far as possible
18 changes: 17 additions & 1 deletion l1-contracts/src/core/interfaces/IRollup.sol
@@ -9,13 +9,29 @@ interface IRollup {

function process(bytes calldata _header, bytes32 _archive) external;

function submitProof(
function submitBlockRootProof(
bytes calldata _header,
bytes32 _archive,
bytes32 _proverId,
// bytes32 _previousBlockHash,
bytes32 _currentBlockHash,
bytes calldata _aggregationObject,
bytes calldata _proof
) external;

// TODO(#7346): Integrate batch rollups
// function submitRootProof(
// bytes32 _previousArchive,
// bytes32 _archive,
// bytes32 _previousBlockHash,
// bytes32 _currentBlockHash,
// bytes32 outHash,
// address[32] calldata coinbases,
// uint256[32] calldata fees,
// bytes32 _proverId,
// bytes calldata _aggregationObject,
// bytes calldata _proof
// ) external;

function setVerifier(address _verifier) external;
}
5 changes: 4 additions & 1 deletion l1-contracts/src/core/libraries/ConstantsGen.sol
@@ -90,7 +90,9 @@ library Constants {
uint256 internal constant ROOT_PARITY_INDEX = 19;
uint256 internal constant BASE_ROLLUP_INDEX = 20;
uint256 internal constant MERGE_ROLLUP_INDEX = 21;
uint256 internal constant ROOT_ROLLUP_INDEX = 22;
uint256 internal constant BLOCK_ROOT_ROLLUP_INDEX = 22;
uint256 internal constant BLOCK_MERGE_ROLLUP_INDEX = 23;
uint256 internal constant ROOT_ROLLUP_INDEX = 24;
uint256 internal constant FUNCTION_SELECTOR_NUM_BYTES = 4;
uint256 internal constant ARGS_HASH_CHUNK_LENGTH = 16;
uint256 internal constant ARGS_HASH_CHUNK_COUNT = 16;
@@ -193,6 +195,7 @@ library Constants {
uint256 internal constant KERNEL_CIRCUIT_PUBLIC_INPUTS_LENGTH = 417;
uint256 internal constant CONSTANT_ROLLUP_DATA_LENGTH = 12;
uint256 internal constant BASE_OR_MERGE_PUBLIC_INPUTS_LENGTH = 29;
uint256 internal constant BLOCK_ROOT_OR_BLOCK_MERGE_PUBLIC_INPUTS_LENGTH = 91;
uint256 internal constant GET_NOTES_ORACLE_RETURN_LENGTH = 674;
uint256 internal constant NOTE_HASHES_NUM_BYTES_PER_BASE_ROLLUP = 2048;
uint256 internal constant NULLIFIERS_NUM_BYTES_PER_BASE_ROLLUP = 2048;
29 changes: 29 additions & 0 deletions l1-contracts/src/core/libraries/HeaderLib.sol
@@ -251,4 +251,33 @@ library HeaderLib {

return fields;
}

// TODO(#7346): Currently using the below to verify block root proofs until batch rollups fully integrated.
// Once integrated, remove the below fn (not used anywhere else).
function toFields(GlobalVariables memory _globalVariables)
internal
pure
returns (bytes32[] memory)
{
bytes32[] memory fields = new bytes32[](Constants.GLOBAL_VARIABLES_LENGTH);

fields[0] = bytes32(_globalVariables.chainId);
fields[1] = bytes32(_globalVariables.version);
fields[2] = bytes32(_globalVariables.blockNumber);
fields[3] = bytes32(_globalVariables.slotNumber);
fields[4] = bytes32(_globalVariables.timestamp);
fields[5] = bytes32(uint256(uint160(_globalVariables.coinbase)));
fields[6] = bytes32(_globalVariables.feeRecipient);
fields[7] = bytes32(_globalVariables.gasFees.feePerDaGas);
fields[8] = bytes32(_globalVariables.gasFees.feePerL2Gas);

// fail if the header structure has changed without updating this function
if (fields.length != Constants.GLOBAL_VARIABLES_LENGTH) {
// TODO(Miranda): Temporarily using this method and below error while block-root proofs are verified
// When we verify root proofs, this method can be removed => no need for separate named error
revert Errors.HeaderLib__InvalidHeaderSize(Constants.HEADER_LENGTH, fields.length);
}

return fields;
}
}
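A ts-side mirror of the new HeaderLib.toFields(GlobalVariables) overload can be useful when cross-checking the public-input layout against the circuit. The sketch below is illustrative (the interface and constant names are assumptions, not the actual yarn-project types); note that here the length guard compares a literal field list against the constant, so it actually fires if the two drift apart.

```typescript
// Hypothetical TS model of GlobalVariables, matching the field order used in
// the Solidity toFields above.
interface GlobalVariables {
  chainId: bigint;
  version: bigint;
  blockNumber: bigint;
  slotNumber: bigint;
  timestamp: bigint;
  coinbase: bigint;      // address as a field
  feeRecipient: bigint;
  feePerDaGas: bigint;
  feePerL2Gas: bigint;
}

const GLOBAL_VARIABLES_LENGTH = 9; // would come from the generated constants

function globalVariablesToFields(gv: GlobalVariables): bigint[] {
  const fields = [
    gv.chainId, gv.version, gv.blockNumber, gv.slotNumber, gv.timestamp,
    gv.coinbase, gv.feeRecipient, gv.feePerDaGas, gv.feePerL2Gas,
  ];
  // Fail loudly if the struct changes without updating this serializer.
  if (fields.length !== GLOBAL_VARIABLES_LENGTH) {
    throw new Error('GlobalVariables serialization out of sync with constants');
  }
  return fields;
}
```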