Based Sequencing with Soft Confirmations #408

Open
3 of 4 tasks
preston-evans98 opened this issue Jun 13, 2023 · 0 comments · Fixed by #510, #535 or #560

Background

The module system currently has work in progress to enable "based sequencing". That will be great for censorship resistance, but it sacrifices the sequencer's ability to provide near-instant soft confirmations. Since no sequencer has a "lock" on global state, none of them can guarantee that no other transaction will come in and mutate the state their transactions depend on.

For example, suppose that Alice wants to make a trade on Uniswap, swapping 1 $ETH for 1000 $DAI with a slippage tolerance of 1%. Accordingly, she signs a transaction and sends it to a sequencer, Steve, who gives her a soft confirmation that the trade will go through. At about the same time, Bob sends a transaction to another sequencer, Tom, which swaps 1000 $ETH for $DAI, moving the price by 30%. Now, if Tom's block lands on the DA layer in front of Steve's, the soft confirmation for Alice will be voided.
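To make the failure mode concrete, here is a toy constant-product (x * y = k) pool with made-up reserves. The `swap_eth_for_dai` helper and all the numbers are illustrative assumptions, not part of the module system:

```rust
/// DAI received for selling `eth_in` into a pool with the given reserves
/// (fees ignored for simplicity).
fn swap_eth_for_dai(eth_reserve: f64, dai_reserve: f64, eth_in: f64) -> f64 {
    let k = eth_reserve * dai_reserve;
    dai_reserve - k / (eth_reserve + eth_in)
}

fn main() {
    // A pool priced at 1000 DAI per ETH
    let (eth, dai) = (5_000.0, 5_000_000.0);

    // Alice's quote if her 1 ETH trade executes against the current state
    let quote = swap_eth_for_dai(eth, dai, 1.0);
    // With a 1% slippage tolerance, she accepts no less than 99% of the quote
    let min_out = quote * 0.99;

    // Bob's 1000 ETH sale lands first, moving the price by roughly 30%...
    let bob_out = swap_eth_for_dai(eth, dai, 1000.0);
    // ...so Alice's trade now executes against the post-Bob reserves
    let actual = swap_eth_for_dai(eth + 1000.0, dai - bob_out, 1.0);

    // Alice would receive far less than her tolerance allows, so her trade
    // reverts and Steve's soft confirmation is voided.
    assert!(actual < min_out);
}
```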

To provide a competitive UX, we would very much like to fix that flaw.

Naive Implementation: Based Sequencing with "Preferred Sequencer"

A naive approach to this problem would be to enshrine some centralized sequencer and guarantee that they will always get first execution rights in each DA layer slot. So, instead of processing blobs in exactly the order they appear on the DA layer, the rollup always processes the blob from the preferred sequencer as if it had appeared at the first index in its slot.

Unfortunately, this approach is vulnerable to censorship. To be concrete, suppose that at the end of DA slot 100 there's a significant arbitrage between the price of $ETH on Uniswap v2 and Uniswap v3. By default, the preferred sequencer will get to extract this arbitrage, since their blob will be processed as if it had appeared at the very beginning of slot 101. But a smart DA layer block proposer will want to extract the arbitrage for themselves. So, they'll have an incentive to create their own blob which extracts the arbitrage, and to censor the preferred sequencer to prevent the sequencer's blob from being processed.

Getting Smarter: Simulating Multiple Proposers

To eliminate the incentive for censorship, we need to ensure that the DA layer block proposer can't get their transaction processed if they censor the preferred sequencer. One possible approach would be to return to centralized sequencing, but that just reintroduces the censorship risk that based sequencing was meant to address. A smarter approach is to require multiple DA layer block proposers to collude in order to censor effectively, while simultaneously encouraging each individual proposer to defect from the cartel. In other words, we want to ensure that non-censorship is the dominant strategy in block proposal creation.

To do that, we need to alter the rollup's state transition rule. Previously, our proposed rule looked like this:

let mut blobs_to_process = vec![];
// First, apply the blob from the preferred sequencer
for blob in blobs {
    if blob.sender() == PREFERRED_SEQUENCER {
        rollup.apply_blob(blob);
    } else {
        blobs_to_process.push(blob);
    }
}
// Then, apply all the other blobs
for blob in blobs_to_process {
    rollup.apply_blob(blob);
}

We can alter that rule to defer execution of blobs from non-preferred sequencers for an even longer period. Instead of processing blobs from other sequencers within the current slot, we can save them (or, in practice, a commitment to them) in our proof output, and process them as part of the next slot:

let mut blobs_to_defer = vec![];
// First, apply the blob from the preferred sequencer
for blob in blobs {
    if blob.sender() == PREFERRED_SEQUENCER {
        rollup.apply_blob(blob);
    } else {
        // Don't execute other blobs now - defer them to the next slot
        blobs_to_defer.push(blob);
    }
}

// Commit to the deferred blobs so they can be checked next slot
let deferred_blob_root = make_merkle_tree(&blobs_to_defer);

// Read the blobs deferred during the *previous* slot from the host, and
// check them against the root committed at that time
let blobs_to_process = Zkvm::read_from_host();
ensure!(make_merkle_tree(&blobs_to_process) == old_deferred_blob_root);

// Then, apply the blobs deferred from the previous slot
for blob in blobs_to_process {
    rollup.apply_blob(blob);
}
Zkvm::commit(deferred_blob_root);

This way, even if the current block proposer censors the preferred sequencer and inserts their own arbitrage transaction onto the DA layer, that transaction won't be processed until the next slot. And since the next DA layer proposer doesn't have anything to gain by censoring the preferred sequencer, we can expect that the sequencer's bundle will land in that slot and extract the arbitrage.

We can carry this idea even further by deferring blob execution for a longer time. Instead of processing "based" blobs in the next slot, we can process them 3 or 5 slots in the future. The further we defer, the more DA layer block proposers have to collude in order to extract the arbitrage themselves - and the less incentive any individual proposer has to do so.
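The N-slot variant can be sketched as a queue of deferred batches plus their commitments. This is a simplified model, not the actual module-system code: `Blob`, `Rollup`, `DEFER_SLOTS`, and the `commit` hash are stand-ins (a real implementation would use a merkle root and verify it in-circuit):

```rust
use std::collections::VecDeque;

const DEFER_SLOTS: usize = 3;

struct Blob(Vec<u8>);

/// Toy commitment over a batch of blobs; a stand-in for a merkle root.
fn commit(blobs: &[Blob]) -> u64 {
    blobs.iter().flat_map(|b| &b.0).fold(0u64, |acc, &byte| {
        acc.wrapping_mul(31).wrapping_add(byte as u64)
    })
}

struct Rollup {
    // One entry per in-flight slot: the deferred blobs and their commitment
    deferred: VecDeque<(Vec<Blob>, u64)>,
    applied: Vec<Blob>,
}

impl Rollup {
    fn new() -> Self {
        Self { deferred: VecDeque::new(), applied: Vec::new() }
    }

    /// Process one DA slot: apply the preferred sequencer's blob immediately,
    /// queue everything else, and execute the batch deferred DEFER_SLOTS ago.
    fn process_slot(&mut self, preferred: Blob, others: Vec<Blob>) {
        self.applied.push(preferred);
        let root = commit(&others);
        self.deferred.push_back((others, root));
        if self.deferred.len() > DEFER_SLOTS {
            let (blobs, old_root) = self.deferred.pop_front().unwrap();
            // In-circuit, the prover re-supplies `blobs` and we check them
            // against the root committed DEFER_SLOTS slots ago.
            assert_eq!(commit(&blobs), old_root);
            self.applied.extend(blobs);
        }
    }
}
```

With `DEFER_SLOTS = 3`, a non-preferred blob posted in slot N only executes during slot N + 3, so a proposer who censors the preferred sequencer must win (or bribe) three consecutive proposer slots to benefit.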

Implementation
