
DA, op-batcher: improvements tracker #10975

Open
protolambda opened this issue Jun 21, 2024 · 2 comments
Labels
A-op-batcher Area: op-batcher

Comments

@protolambda
Contributor

protolambda commented Jun 21, 2024

With blob fee market dynamics actively changing, we need to:

  1. out-of-protocol: adapt the batch-submission operational functionality to maintain fee accuracy and keep op-batcher operating costs optimal.
  2. in-protocol: adapt the fee pricing to give end-users the lowest and most accurate fees.

Out of protocol

These changes can be implemented now, to mitigate any short-term issues.

Ideas:

  • Add a back-pressure system to reduce the sequencer's DA usage by dynamically lowering block throughput, measured in total L1 cost, once the batcher starts to lag behind in work. The sequencer block-building code could inspect the gap between the safe head and the unsafe head: the larger the gap, the less L1 cost it includes per block.
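As a rough illustration of the back-pressure idea, the following sketch scales a per-block L1-cost budget down as the safe/unsafe head gap grows. All names, units, and thresholds here are hypothetical illustrations, not op-geth or op-node APIs:

```go
package main

import "fmt"

// Hypothetical back-pressure sketch: the wider the gap between the unsafe
// head (sequencer tip) and the safe head (derived from batcher-submitted
// data), the smaller the per-block L1-cost budget the block builder allows.
const (
	maxL1CostBudget = 1_000_000 // full budget when the batcher keeps up (illustrative units)
	lagSoftLimit    = 100       // blocks of lag before throttling starts
	lagHardLimit    = 1_000     // lag at which the budget reaches zero
)

// l1CostBudget linearly scales the budget down as lag grows from the
// soft limit to the hard limit.
func l1CostBudget(unsafeHead, safeHead uint64) uint64 {
	lag := unsafeHead - safeHead
	if lag <= lagSoftLimit {
		return maxL1CostBudget
	}
	if lag >= lagHardLimit {
		return 0
	}
	// Linear interpolation between the soft and hard limits.
	remaining := lagHardLimit - lag
	span := uint64(lagHardLimit - lagSoftLimit)
	return maxL1CostBudget * remaining / span
}

func main() {
	fmt.Println(l1CostBudget(1050, 1000)) // within soft limit: full budget
	fmt.Println(l1CostBudget(1550, 1000)) // halfway between limits: half budget
	fmt.Println(l1CostBudget(2100, 1000)) // beyond hard limit: zero
}
```

A linear ramp is only one choice; a real design would likely tune the curve and limits against observed batcher throughput.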

In protocol

These changes require a breaking change, and thus a coordinated network upgrade.

Ideas:

  • Dynamically swap pricing parameters between the calldata and blob configurations, depending on which form of DA is currently optimal. This lowers costs for users automatically, assuming the batcher is also operated optimally.
  • Monitor volatility in DA fees, and dynamically increase L1 fee parameters when high volatility is observed, to reduce the risk of a discontinuity between optimistic fees (at the time of the L2 transaction) and realized fees (at the time of the L1 transaction).
  • Support multiple batch-submitter addresses submitting for the same chain and inbox, giving the batch submitter more flexibility to work around transaction-replacement / nonce issues.
    • Warning: this conflicts with the generalized batch-authentication idea of making an L1 contract emit an event to accept/deny data in the inbox.
    • Warning: this conflicts with a new initiative to introduce strict batch ordering, which can significantly reduce op-node startup/reset time (no need to walk back the L1 chain as far) and optimize proofs (much less execution due to much shorter L1 traversal). This is being suggested as a feature on top of the steady-batch-derivation project.
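To illustrate the first idea above, here is a minimal sketch of a calldata-vs-blob cost comparison under simplified EIP-4844-style pricing. The constants and decision rule are assumptions for illustration, not the protocol's actual fee formula:

```go
package main

import "fmt"

// Simplified DA cost model: calldata pays per byte at the execution base
// fee; blobs pay per whole blob at the blob base fee, even when a blob is
// only partially filled.
const (
	calldataGasPerByte = 16     // approximate gas per nonzero calldata byte
	blobSize           = 131072 // bytes of data per blob (EIP-4844)
	blobGasPerBlob     = 131072 // blob gas consumed per blob (EIP-4844)
)

// calldataCost returns the wei cost of posting n bytes as calldata.
func calldataCost(n, baseFee uint64) uint64 {
	return n * calldataGasPerByte * baseFee
}

// blobCost returns the wei cost of posting n bytes in blobs.
func blobCost(n, blobBaseFee uint64) uint64 {
	blobs := (n + blobSize - 1) / blobSize // round up to whole blobs
	return blobs * blobGasPerBlob * blobBaseFee
}

// cheaperDA picks the cheaper DA option for the given fee environment.
func cheaperDA(n, baseFee, blobBaseFee uint64) string {
	if blobCost(n, blobBaseFee) <= calldataCost(n, baseFee) {
		return "blobs"
	}
	return "calldata"
}

func main() {
	// A 100 KB batch: blobs win easily while the blob base fee is low...
	fmt.Println(cheaperDA(100_000, 30_000_000_000, 1))
	// ...but calldata can win for a tiny batch under a high blob base fee.
	fmt.Println(cheaperDA(200, 10_000_000_000, 50_000_000_000))
}
```

The in-protocol part of the idea is that whichever side of this comparison wins would also drive which set of L1 fee parameters users are charged under.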
@tynes
Contributor

tynes commented Jun 21, 2024

I want to explore the possibility of using ethereum-optimism/specs#221 rather than directly supporting multiple batcher addresses. Adding multiple batcher addresses to the protocol will add tech debt, whereas using a smart contract to define the logic is generic and can be extended permissionlessly to support many features. We would need to engineer around the fact that batches can be posted in both blobs and calldata, but with the right smart contract architecture this is certainly possible to handle. The general philosophy the OP Stack should follow is creating powerful abstractions that enable permissionless innovation, rather than hardcoding particular solutions to problems.

@tynes
Contributor

tynes commented Jul 7, 2024

Titan builder created a special RPC called eth_sendBlobs, where you send an array of transactions that share the same nonce but carry a different number of blobs. The assumption is that we build and sign a tx with 1 blob, 2 blobs, 3 blobs, all the way up to 6 blobs. Each transaction would include the same blobs as the previous one, plus the next blob. This allows their builder to more easily build a block when there are competing blob transactions that cannot all fit into the same L1 block. See https://docs.titanbuilder.xyz/api/eth_sendblobs
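A minimal sketch of constructing such a same-nonce variant set, where variant i carries the first i blobs. BlobTx and buildBlobTxVariants are hypothetical stand-ins for illustration, not Titan's or op-geth's API:

```go
package main

import "fmt"

// BlobTx is a hypothetical stand-in for a signed blob transaction.
type BlobTx struct {
	Nonce uint64
	Blobs [][]byte
}

// buildBlobTxVariants returns len(blobs) transactions sharing one nonce,
// where the i-th variant (1-based) includes blobs[0:i]. A builder can then
// pick the largest variant that still fits the L1 block's blob capacity.
func buildBlobTxVariants(nonce uint64, blobs [][]byte) []BlobTx {
	txs := make([]BlobTx, 0, len(blobs))
	for i := 1; i <= len(blobs); i++ {
		txs = append(txs, BlobTx{Nonce: nonce, Blobs: blobs[:i]})
	}
	return txs
}

func main() {
	blobs := [][]byte{{0x01}, {0x02}, {0x03}}
	for _, tx := range buildBlobTxVariants(7, blobs) {
		fmt.Printf("nonce=%d blobs=%d\n", tx.Nonce, len(tx.Blobs))
	}
}
```

Because all variants share a nonce, at most one can land on L1, which is what makes the pattern safe for the submitter.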

@geoknee geoknee added the A-op-batcher Area: op-batcher label Nov 5, 2024