With actively changing blob fee market dynamics we need to:

Out of protocol

These changes can be implemented now, to mitigate any short-term issues.

Ideas:
- Parallel processing of L2 chain consolidation (DA derivation running in parallel with engine processing of p2p blocks): see the events-processing work and the later parallel deriver execution.
- Add a back-pressure system to reduce the sequencer's DA usage by dynamically lowering block throughput in terms of total L1 cost once the batcher starts to lag behind. The sequencer block-building code could inspect the gap between the safe head and the unsafe head: the larger the gap, the less L1 cost it includes per block (see the sketch after this list).
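
For the back-pressure idea, the core mechanism could be a simple budget function inside the sequencer's block builder. A minimal sketch, assuming a linear decay; `daBudget`, `maxDABytesPerBlock`, `freeLag`, `maxLag` and the thresholds are all hypothetical names, not existing OP Stack code:

```go
package sequencer

// Hypothetical back-pressure helper: shrink the per-block DA budget as the
// batcher falls behind, i.e. as the unsafe head runs ahead of the safe head.

const (
	// maxDABytesPerBlock is an assumed upper bound on the estimated DA bytes the
	// block builder is willing to include when the batcher is fully caught up.
	maxDABytesPerBlock = 130_000
	// minDABytesPerBlock keeps some throughput even under heavy back-pressure.
	minDABytesPerBlock = 10_000
	// freeLag is the unsafe-safe gap (in L2 blocks) tolerated without throttling.
	freeLag = 50
	// maxLag is the gap at which the DA budget bottoms out at minDABytesPerBlock.
	maxLag = 1_000
)

// daBudget returns how many estimated DA bytes the next L2 block may include,
// given the current safe and unsafe L2 block numbers. The budget decays
// linearly from maxDABytesPerBlock to minDABytesPerBlock as the lag grows
// from freeLag to maxLag.
func daBudget(safeNum, unsafeNum uint64) uint64 {
	if unsafeNum <= safeNum {
		return maxDABytesPerBlock
	}
	lag := unsafeNum - safeNum
	if lag <= freeLag {
		return maxDABytesPerBlock
	}
	if lag >= maxLag {
		return minDABytesPerBlock
	}
	// Linear interpolation between the two budgets over [freeLag, maxLag].
	span := uint64(maxLag - freeLag)
	excess := lag - freeLag
	return maxDABytesPerBlock - (maxDABytesPerBlock-minDABytesPerBlock)*excess/span
}
```

The block builder would then stop including transactions once their estimated DA size exceeds this budget; the decay curve and thresholds would need tuning against real batcher lag.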
In protocol
These are breaking changes, and thus require a coordinated network upgrade.
Ideas:
- Dynamically swap the pricing parameters between the calldata and blob configurations, depending on which form of DA is currently optimal. This lowers costs for users automatically, assuming the batcher is also operated optimally (see the cost-comparison sketch after this list).
- Monitor volatility in DA fees, and dynamically increase the L1 fee parameters when high volatility is observed, to reduce the risk of a discontinuity between optimistic fees (at L2 transaction time) and realized fees (at L1 transaction time); see the fee-scalar sketch after this list.
- Support multiple batch-submitter addresses submitting to the same chain and inbox, giving the batch submitter more flexibility and a way to work around transaction-replacement / nonce issues.
  - Warning: this conflicts with the generalized batch-authentication idea of having an L1 contract emit an event to accept/deny data in the inbox.
  - Warning: this also conflicts with a new initiative to introduce strict batch ordering, which can significantly reduce op-node startup/reset time (no need to walk back the L1 chain as far) and optimize proofs (much less execution thanks to a much shorter L1 traversal). That feature is being suggested on top of the steady-batch-derivation project.
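
For the dynamic calldata/blob swap, the "optimal choice of DA" question reduces to comparing the estimated L1 cost of the two encodings at current fee levels. A rough sketch under simplifying assumptions (flat 16 gas per calldata byte, no compression, invented function names):

```go
package batcher

import "math/big"

// Rough, hedged cost model for choosing between calldata and blobs for a batch
// of a given size. Constants and the decision rule are illustrative only; a
// real implementation would account for compression, zero-byte discounts and
// the exact usable bytes per blob.

const (
	calldataGasPerByte = 16     // EIP-2028 non-zero byte cost, used as a pessimistic flat rate
	blobSizeBytes      = 131072 // one EIP-4844 blob
	blobGasPerBlob     = 131072 // GAS_PER_BLOB
)

// calldataCost estimates the wei cost of posting `size` bytes as calldata.
func calldataCost(size uint64, baseFee *big.Int) *big.Int {
	gas := new(big.Int).SetUint64(size * calldataGasPerByte)
	return gas.Mul(gas, baseFee)
}

// blobCost estimates the wei cost of posting `size` bytes as blobs.
func blobCost(size uint64, blobBaseFee *big.Int) *big.Int {
	blobs := (size + blobSizeBytes - 1) / blobSizeBytes // round up to whole blobs
	gas := new(big.Int).SetUint64(blobs * blobGasPerBlob)
	return gas.Mul(gas, blobBaseFee)
}

// useBlobs reports whether blobs are currently the cheaper DA choice for a
// batch of `size` bytes, given the latest observed fee levels.
func useBlobs(size uint64, baseFee, blobBaseFee *big.Int) bool {
	return blobCost(size, blobBaseFee).Cmp(calldataCost(size, baseFee)) < 0
}
```

The same comparison could then drive which pricing configuration is exposed to users on L2.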
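
For the volatility idea, one hedged way to phrase it: measure the relative standard deviation of recent blob base fees and widen the L1 fee scalar accordingly, so fees charged at L2 transaction time are less likely to undershoot fees realized at L1 submission time. All names, the sample window and the 50% cap below are invented for illustration:

```go
package feemonitor

import "math"

// relativeStdDev returns stddev(samples) / mean(samples), or 0 if there is no data.
func relativeStdDev(samples []float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	var sum float64
	for _, s := range samples {
		sum += s
	}
	mean := sum / float64(len(samples))
	if mean == 0 {
		return 0
	}
	var variance float64
	for _, s := range samples {
		d := s - mean
		variance += d * d
	}
	variance /= float64(len(samples))
	return math.Sqrt(variance) / mean
}

// scaledL1FeeScalar bumps a base scalar by up to 50%, depending on the observed
// volatility of recent blob base fees (volatility is capped at 1.0).
func scaledL1FeeScalar(baseScalar uint64, recentBlobBaseFees []float64) uint64 {
	vol := relativeStdDev(recentBlobBaseFees)
	if vol > 1 {
		vol = 1
	}
	return baseScalar + uint64(float64(baseScalar)*0.5*vol)
}
```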
I want to explore the possibility of using ethereum-optimism/specs#221 rather than directly supporting multiple batcher addresses. Adding multiple batcher addresses to the protocol would add tech debt, whereas using a smart contract to define the logic is generic and can be extended permissionlessly to support many features. We would need to engineer around the fact that batches can be posted as both blobs and calldata, but with the right smart-contract architecture this is certainly possible to handle. The general philosophy the OP Stack should follow is to create powerful abstractions that enable permissionless innovation, rather than hardcoding particular solutions to problems.
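
A hypothetical sketch of what contract-based batch authentication could look like on the derivation side, if an authorizer contract emitted an "accept" event per batch. The contract address, event topic and data-hash encoding below are assumptions for illustration, not part of specs#221 or the current op-node code:

```go
package derive

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// Placeholder topic for a hypothetical BatchAccepted(bytes32) event;
// not a real event signature hash.
var batchAcceptedTopic = common.HexToHash(
	"0x0000000000000000000000000000000000000000000000000000000000000001")

// acceptedHashes collects the data hashes that the authorizer contract accepted
// in this L1 block, by scanning its logs in the block receipts.
func acceptedHashes(receipts types.Receipts, authorizer common.Address) map[common.Hash]struct{} {
	out := make(map[common.Hash]struct{})
	for _, rec := range receipts {
		for _, lg := range rec.Logs {
			if lg.Address != authorizer || len(lg.Topics) != 2 || lg.Topics[0] != batchAcceptedTopic {
				continue
			}
			out[lg.Topics[1]] = struct{}{} // topic[1] assumed to be the keccak256 of the batch data
		}
	}
	return out
}

// isAcceptedBatch replaces the fixed batcher-address check: batch data is valid
// if its hash appears in the accepted set for this L1 block.
func isAcceptedBatch(dataHash common.Hash, accepted map[common.Hash]struct{}) bool {
	_, ok := accepted[dataHash]
	return ok
}
```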
Titan builder created a special RPC called eth_sendBlobs, where you send an array of transactions that share the same nonce but carry a different number of blobs. The idea is that we build and sign a tx with 1 blob, 2 blobs, 3 blobs, all the way up to 6 blobs; each variant includes the same blobs, the only difference being the addition of the next blob. This makes it easier for their builder to build a block when there are competing blob transactions that cannot all fit into the same L1 block. See https://docs.titanbuilder.xyz/api/eth_sendblobs
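
A hedged sketch of how a batcher might hand the same-nonce variants to that endpoint via go-ethereum's generic RPC client. The eth_sendBlobs method name comes from the linked docs, but the exact parameter encoding (a single array of raw signed transactions) is an assumption to verify against them:

```go
package blobsubmit

import (
	"context"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

// sendBlobVariants submits pre-signed blob-tx variants to a builder endpoint.
// rawSignedTxs is expected to hold the same transaction signed repeatedly with
// the same nonce, carrying 1, 2, ... up to 6 blobs, so the builder can pick
// whichever variant still fits into the blob space of its block.
func sendBlobVariants(ctx context.Context, builderURL string, rawSignedTxs []hexutil.Bytes) error {
	client, err := rpc.DialContext(ctx, builderURL)
	if err != nil {
		return err
	}
	defer client.Close()

	var result interface{}
	return client.CallContext(ctx, &result, "eth_sendBlobs", rawSignedTxs)
}
```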