The optimistic-based interop scheme has this concept of dependencies, where a chain can depend on another chain. When a chain is defined as a dependency, its outputs are required as inputs to the chain that depends on it. All L2 OP Stack chains consider L1 as a dependency, which means that all L1 receipts and transactions are used as inputs when deriving the L2 chain.
The special property of L1 as a dependency is that it can originate force-include transactions, also known as deposit transactions. To make deposit transactions deterministic, there is a synchrony assumption: the L2's progression depends on the liveness of the L1 and on the ability to connect to it. From time to time, the L2's connection to an L1 node is lost.
In the world of OP Stack interop, it is possible to define a dependency on another L2 chain. Between L2s, there is no concept of force-include transactions. If there were, it would create massive synchrony issues: if a single remote chain falls out of sync at the tip, it could cause a cascading reorg or chain halt across all of the interoperable chains, as a depending chain notices that it doesn't have all of the inputs required to progress correctly.
It is not scalable to require the outputs of another chain as inputs to a chain. In the longer term, we need some sort of succinct proof to reduce the need to fully execute every chain in the dependency set. One way to do this is to update the interop scheme so that it accepts inbound messages from chains in the dependency set OR from chains that have been zk proven. This can happen 100% at the application layer.
The solution involves 2 types of proofs that together enable a backwards-compatible API, so that from the point of view of a developer, the exact same tooling for sending cross-chain messages will just work:
Remote safe output extension proof
Log inclusion proof
The remote safe output extension proof exists to extend the "finalized" view of a remote chain. This can be thought of as similar to how a light client syncs the headers of a chain. The proof program would need to commit to all of the execution of the remote chain between two points.
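As a rough sketch of what such a proof might attest to, the claim could look something like the struct below. The struct and its field names are illustrative, not an existing OP Stack interface.

```solidity
// Illustrative claim for a remote safe output extension proof. The proof would
// attest that executing the remote chain from the block committed to by
// `startOutputRoot` through `endBlockNumber` yields `endOutputRoot`.
struct OutputExtensionClaim {
    uint256 chainId;          // remote chain whose proven view is being extended
    bytes32 startOutputRoot;  // previously proven / anchored output root
    uint256 startBlockNumber; // block number of the anchored output
    bytes32 endOutputRoot;    // output root after executing up to endBlockNumber
    uint256 endBlockNumber;   // new "proven up to" height recorded in safeOutputs
}
```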
The log inclusion proof exists to bind an identifier + log hash as correctly paired and part of the canonical chain. This could in theory be a Merkle proof, but we do not have access to every historical block header on-chain. The proof program would need to walk back from a finalized anchor point and show that the log does exist in the chain at the given identifier.
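Sketching in the same way, the statement a log inclusion proof verifies might look like this. The claim struct is illustrative; the Identifier fields match the ones used in the pseudocode below, with `origin` and `logIndex` added for completeness.

```solidity
// Identifier of a log on a remote chain.
struct Identifier {
    address origin;      // contract that emitted the log
    uint256 blockNumber; // block containing the log
    uint256 logIndex;    // index of the log within the block
    uint256 timestamp;   // timestamp of that block
    uint256 chainId;     // chain the log was emitted on
}

// Illustrative claim for a log inclusion proof. The proof program would walk
// headers back from the anchored output root to `id.blockNumber` and show that
// a log hashing to `logHash` exists at `id`.
struct LogInclusionClaim {
    Identifier id;
    bytes32 logHash;
    bytes32 anchorOutputRoot; // a proven output root that the walk-back starts from
}
```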
This scheme is meant to remove the need to fully sync all chains that are interoperable. As long as one person is creating a zk proof for a chain, that proof can be used to process inbound messages from that chain. This scheme still depends on governance, as you do not want to interop with chains that are not stage 1/2 or that have backdoors in their system.
We may want to consider requiring a multiproof for this scheme and having a "maturation period" before the proofs can be considered finalized. During this maturation period, it could be possible to overturn the results of a proof. In a stage 2-like system, a disagreement between the proofs would result in the interop protocol being paused and only restartable by a security council. A rough sketch of the maturation period follows the pseudocode below.
The following pseudocode shows how this could be implemented. It does not include all possible details and is meant more as an exercise to show what could be possible. Modified from here.
```solidity
// The L2 block number of remote chains by chain id, i.e. the block number
// that they have been proven up to
mapping(uint256 _chainId => uint256 _blockNumber) safeOutputs;

// Log hashes that have been proven to be included
mapping(bytes32 _hash => bool _proven) provenLogs;

function _checkIdentifier(Identifier calldata _id, bytes32 _logHash) internal view {
    if (_id.timestamp > block.timestamp || _id.timestamp <= interopStart()) revert InvalidTimestamp();

    bool isInDependencySet = IDependencySet(Predeploys.L1_BLOCK_ATTRIBUTES).isInDependencySet(_id.chainId);
    bool isZkProven = checkZkProven(_id, _logHash);

    if (!isInDependencySet && !isZkProven) {
        revert Invalid();
    }
}

function checkZkProven(Identifier calldata _id, bytes32 _logHash) internal view returns (bool) {
    // The message is acceptable if its block is within the proven height of the
    // remote chain and its log hash has been proven to be included.
    uint256 viewHeight = safeOutputs[_id.chainId];
    bool proven = provenLogs[_logHash];
    if (_id.blockNumber <= viewHeight && proven) return true;
    return false;
}

function proveChain(uint256 _chainId, uint256 _blockHeight, bytes memory _proof) external {
    // Remote safe output extension proof: extends the proven view of a remote chain.
    bool verified = verifyProof(_chainId, _blockHeight, _proof);
    if (!verified) revert InvalidProof();
    safeOutputs[_chainId] = Math.max(_blockHeight, safeOutputs[_chainId]);
}

function proveLog(bytes32 _logHash, Identifier memory _id, bytes memory _proof) external {
    // Log inclusion proof: binds a log hash to an identifier within the proven view.
    bool verified = verifyLogInclusion(_logHash, _id, _proof);
    if (!verified) revert InvalidProof();
    if (safeOutputs[_id.chainId] < _id.blockNumber) revert InvalidProof();
    // TODO: it may be required to hash together the identifier and the log hash for safety purposes
    provenLogs[_logHash] = true;
}
```
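As referenced above, one way the maturation period could be layered on top of `proveChain`, again only as a hedged sketch: the `pendingOutputs` mapping, `MATURATION_PERIOD` value, `finalizeChain` function, and `NotMatured` error are all hypothetical additions, not part of the pseudocode above.

```solidity
uint256 constant MATURATION_PERIOD = 7 days; // illustrative value

error NotMatured();

struct PendingOutput {
    uint256 blockHeight;
    uint256 provenAt;
}

// Proven heights that have not yet matured into safeOutputs
mapping(uint256 _chainId => PendingOutput) pendingOutputs;

// Replaces proveChain above: a verified proof only stages a pending height.
function proveChain(uint256 _chainId, uint256 _blockHeight, bytes memory _proof) external {
    bool verified = verifyProof(_chainId, _blockHeight, _proof);
    if (!verified) revert InvalidProof();
    if (_blockHeight > pendingOutputs[_chainId].blockHeight) {
        pendingOutputs[_chainId] = PendingOutput(_blockHeight, block.timestamp);
    }
}

// After the maturation period, anyone can promote the pending height. During the
// window, a multiproof disagreement could pause the protocol and drop the pending entry.
function finalizeChain(uint256 _chainId) external {
    PendingOutput memory pending = pendingOutputs[_chainId];
    if (block.timestamp < pending.provenAt + MATURATION_PERIOD) revert NotMatured();
    safeOutputs[_chainId] = Math.max(pending.blockHeight, safeOutputs[_chainId]);
}
```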
Nice writeup! One quick question: what is the relationship between interop ZK proofs and ZK fault proofs? I feel they should have a lot of similarities but also subtle differences.