Jovian: DA footprint block limit #317
Conversation
Force-pushed from dbe366c to 372ebdd
protocol/calldata-block-limit.md
Outdated
block's gas used field will be the total calldata footprint instead. This may impact some analytics-based services
like block explorers and those services need to be educated about the necessary steps to adapt their services to this
I could imagine more than just analytics services being affected: developer tools like cast that perform sanity validation could find themselves in error when the header value no longer represents the actual sum of transaction gas used.
For block explorers -- how will they know that footprint is the cause of the block fullness, besides just forensically comparing the gas used fields?
They have to compare `block.gas_used` to `sum(tx.gas_used)`, and if it's larger, then it's the `sum(tx.da_footprint)`.
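
A minimal sketch of that heuristic in Python, assuming hypothetical `Block` and `Tx` objects with the fields named above (not a real client API):

```python
from dataclasses import dataclass

@dataclass
class Tx:
    gas_used: int

@dataclass
class Block:
    gas_used: int       # header field, repurposed under this proposal
    transactions: list

def block_is_da_limited(block: Block) -> bool:
    """True if the header gas_used was raised by the DA footprint."""
    execution_gas = sum(tx.gas_used for tx in block.transactions)
    # Under the proposal, header gas_used = max(execution_gas, total DA
    # footprint), so a strictly larger header value means the DA footprint
    # dominated this block.
    return block.gas_used > execution_gas
```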
We have a data indexing check that the sum of transaction gas used equals the block gas used, so we'll need to update this on our side.
We should prepare a "change doc" for data providers. Some are still missing the blob receipt fields.
Force-pushed from 7dc1ff4 to 3b6635e
protocol/da-footprint-block-limit.md
Outdated
Current L1 throughput is 6 blobs per block at target with a max of 9 blobs per block.
This is approx. `128 kB * 6 / 12 s = 64 kB/s` or `96 kB/s`, resp. The L1 cost is proportional to the L1 origin's base
fee and blob base fee. But most OP Stack chains have gas targets high enough to accept far more than 128kb of calldata
per (2s) block. So the calldata floor cost doesn't nearly limit calldata throughput enough to prevent.
Trails off here. I think you mean priority fee auctions. Recommend introducing PFA acronym.
services need to be educated about the necessary steps to adapt their services to this new calculation of the block gas
used.

## Alternatives Considered
How about alternatives to repurposing the `block.gasUsed` field? What are the tradeoffs if we were to use a net-new field instead?
### DA footprint gas scalar

I propose to make the DA footprint gas scalar parameter configurable via the `SystemConfig`. The next Ethereum hardfork
Perhaps clarify who can change the configuration, i.e. `SystemConfigOwner`.
| 800 | 50,000 | 17 (0.03%) | 23 (0.04%) | 5.5% | 199.8% |

#### Base
We should double check these figures against estimates from the base team cc @niran .
Yes, the main thing that jumped out at me was that our throttled DA limit of 192kb (May 28 - Aug 8) typically puts us below the block gas target. As a result, I've been assuming that typical blocks at target exceed 192kb, but this still needs to be verified either way. If that's true, the percentage of blocks where the scaled DA exceeds the gas usage should be much higher.
I was wrong about this: I think our typical blocks were only near 192kb before our elasticity multiplier change on June 18, so only about 50 days of the 120-day period would allow such blocks, and even then, only under calldata stress.
For the past hour or so of blocks, I get:

| Percentile | Actual DA Bytes | Estimated DA Bytes |
|---|---|---|
| p50 | 40583 | 70891 |
| p90 | 56649 | 102646 |
| p99 | 70145 | 134809 |
| max | 115452 | 176597 |
Sorry for the confusion, and thanks for the great analysis, @chuxinh!
From today's review session:
- We would like to double check the data analysis to ensure it matches napkin math e.g. around throttling
- We would like to consider if we can modify the design to allow for a higher target compared to the gas limit
- We are aligned on the overall design as the best we have yet considered for addressing the core problem
- We are also aligned on having the scalar be configurable
The following tables show, for each analyzed chain, the resulting statistics.
* *Scalar*: DA footprint gas scalar value.
* *Effective Limit*: The DA usage limit that the given gas scalar would imply (`block_gas_limit / da_footprint_gas_scalar`), in estimated compressed bytes.
There's also an implied "Effective Target" for DA usage of `block_gas_limit / da_footprint_gas_scalar / elasticity_multiplier`. For chains with `elasticity_multiplier == 2` and desired DA limits much lower than the L1's total capacity, this isn't much of an issue. But Base has `elasticity_multiplier == 3`, which results in a target that is 1/3 of the limit.

One approach that would address this would be to move from a configurable `da_footprint_gas_scalar` to a configurable `da_usage_target` and `da_usage_limit`. For each block, we'd first ensure that `da_usage_estimate <= da_usage_limit` for the block to be valid. Then to calculate the change in base fee, we'd calculate the `excess_da_usage_estimate`, then multiply it by `block_gas_limit / da_usage_limit` to get an `excess_da_footprint` value that is comparable to `excess_gas_used`, and can potentially be negative. We'd calculate `da_footprint = min(block_gas_limit / elasticity_multiplier + excess_da_footprint, block_gas_limit)`, which can also potentially be negative. Finally, we'd set `gas_used = max(sum(tx.gas_used), da_footprint)`.
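
A sketch of that calculation, under the assumption that `excess_da_usage_estimate` is measured relative to the target (the comment doesn't spell out the baseline); names follow the comment, and the integer math is illustrative:

```python
def compute_da_footprint(
    da_usage_estimate: int,
    da_usage_target: int,
    da_usage_limit: int,
    block_gas_limit: int,
    elasticity_multiplier: int,
) -> int:
    # Validity: the block's estimated DA usage may not exceed the limit.
    assert da_usage_estimate <= da_usage_limit, "invalid block: DA limit exceeded"

    # Assumed baseline: excess is measured against the target; may be negative.
    excess_da_usage_estimate = da_usage_estimate - da_usage_target
    # Scale into gas units so it is comparable to excess_gas_used.
    excess_da_footprint = excess_da_usage_estimate * block_gas_limit // da_usage_limit

    # Shift by the gas target and cap at the gas limit; may still be negative.
    return min(
        block_gas_limit // elasticity_multiplier + excess_da_footprint,
        block_gas_limit,
    )

# Header field per the comment: gas_used = max(sum(tx.gas_used), da_footprint)
```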
I think we can proceed with just a single scalar. Here's how I think we'd approach setting the scalar.

No chain can sustain more than 64 kb/s of compressed data because that's the target throughput for the entire blobspace, so we all have `l1_target_throughput = 64000`.

`da_footprint_gas_scalar = block_gas_limit / (block_time * l1_target_throughput * estimation_ratio * elasticity_multiplier)`

For Base, our estimated DA sizes in production seem to be about 50-100% higher than the actual sizes, giving us an `estimation_ratio = 1.5`. With an elasticity of 3, a block time of two seconds, and a block gas limit of 150M, we get:

`da_footprint_gas_scalar = 150,000,000 / (2 * 64000 * 1.5 * 3) = 260.4`

The base fee should only begin to rise when the DA footprint is above the block gas target. To double-check that, we calculate `da_usage_target = block_gas_limit / elasticity_multiplier / da_footprint_gas_scalar = 150,000,000 / 3 / 260 = 192,307`, which is a good point for Base to start pricing out DA usage. Unfortunately, this produces a `da_usage_limit = block_gas_limit / da_footprint_gas_scalar = 150,000,000 / 260 = 576,923`, which is significantly higher than the `always_throttle` value we already use. But I think that's okay! We would get base fees that increase when estimated DA usage is above 192kb per block, and would rely on sequencer throttling to enforce the maximum in practice.

Assuming an `estimation_ratio = 1.5` for all OP Stack chains, here are the values that price out DA throughput above 64 kb/s.

* OP Mainnet = `40,000,000 / (2 * 64000 * 1.5 * 2) = 104.2`
* Ink = `30,000,000 / (1 * 64000 * 1.5 * 2) = 156.25`
* Unichain = `30,000,000 / (1 * 64000 * 1.5 * 2) = 156.25`
* Soneium = `40,000,000 / (2 * 64000 * 1.5 * 2) = 104.2`
* Mode = `30,000,000 / (2 * 64000 * 1.5 * 2) = 78.125`
* World Chain = `60,000,000 / (2 * 64000 * 1.5 * 2) = 156.25`

Since most chains don't need to be concerned about a DA target that prices out usage, they can focus on using the scalar for the DA limit rather than the target. Multiplying each of those values by `elasticity_multiplier * blob_target / blob_limit = 2 * 6 / 9 = 4/3` will produce scalars that prevent a single chain from ever exceeding the throughput of the blob limit on its own. Scalar values higher than that can be used to target a particular share of blob throughput.

Setting all chains to `da_footprint_gas_scalar = 260` shouldn't cause problems for any chain (though World Chain would want to change this if they approach 60% of blob throughput). The original proposal of 800 is probably fine for chains that expect to use less than 10% of blob throughput.
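
A quick sketch reproducing that arithmetic (constants are the values quoted in the comment):

```python
L1_TARGET_THROUGHPUT = 64_000  # bytes/s of compressed data across all blobspace
ESTIMATION_RATIO = 1.5         # estimated DA size / actual size, per Base's data

def da_footprint_gas_scalar(block_gas_limit, block_time, elasticity_multiplier):
    return block_gas_limit / (
        block_time * L1_TARGET_THROUGHPUT * ESTIMATION_RATIO * elasticity_multiplier
    )

print(da_footprint_gas_scalar(150_000_000, 2, 3))  # Base        -> ~260.4
print(da_footprint_gas_scalar(40_000_000, 2, 2))   # OP Mainnet  -> ~104.2
print(da_footprint_gas_scalar(30_000_000, 1, 2))   # Ink         -> 156.25
print(da_footprint_gas_scalar(60_000_000, 2, 2))   # World Chain -> 156.25
```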
It might be even simpler to have the configurable value be something like `estimated_da_target`, which would be the L1's target throughput per second, expressed as the estimated FastLZ equivalent. In other words, `estimated_da_target = l1_target_throughput * estimation_ratio`. Then this configured value would only need to change when blob capacity changes, and it would likely be the same for every OP Stack chain. The actual `da_footprint_gas_scalar` would be calculated on demand by `block_gas_limit / (block_time * estimated_da_target * elasticity_multiplier)`. All of those values come from the chain specification or system config.

(This approach also works if we want `estimated_da_limit` to be what we configure instead of the target. But configuring either the target or the limit seems smoother to operate than configuring the scalar directly.)
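
Under that variant, the scalar is derived on demand from chain parameters; a minimal sketch, assuming the `estimated_da_target = 64000 * 1.5` value from the comment:

```python
ESTIMATED_DA_TARGET = 64_000 * 1.5  # l1_target_throughput * estimation_ratio

def derived_da_footprint_gas_scalar(block_gas_limit, block_time, elasticity_multiplier):
    # All inputs come from the chain spec or SystemConfig; only
    # ESTIMATED_DA_TARGET would change when blob capacity changes.
    return block_gas_limit / (block_time * ESTIMATED_DA_TARGET * elasticity_multiplier)
```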
as the sum total of all transactions' gas used. However, it is proposed to just repurpose a block's `gas_used` field to
hold the maximum over both resources' totals:
I would expect `gas_used` header fields that don't match the sum of the gas used by each transaction to break something somewhere. I don't know what would break, but I'd be surprised if nothing is relying on that invariant.
Force-pushed from 0f085fb to 9100007
Co-authored-by: George Knee <george@oplabs.co>
Force-pushed from 9100007 to 17fa8cf
A DA footprint block limit is introduced to mitigate DA spam and prevent priority fee auctions. By tracking DA footprint alongside gas, this approach applies the block gas limit to a transaction's estimated DA impact (derived from estimating its compressed size, including calldata and other metadata that is part of the transaction's batch data) without altering individual transaction gas mechanics. Preliminary analysis shows minimal impact on most blocks on production networks like Base or OP Mainnet.
Closes ethereum-optimism/optimism#17009.
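
A high-level sketch of the proposed mechanism during block building; the estimator, the scalar value, and all names here are illustrative assumptions rather than the normative design:

```python
from dataclasses import dataclass

DA_FOOTPRINT_GAS_SCALAR = 800  # example value from the proposal's analysis

@dataclass
class Tx:
    gas_used: int
    estimated_compressed_size: int  # FastLZ-style estimate, in bytes

def da_footprint(tx: Tx) -> int:
    # Estimated compressed size scaled into gas units.
    return tx.estimated_compressed_size * DA_FOOTPRINT_GAS_SCALAR

@dataclass
class BlockBuilder:
    gas_limit: int
    gas_sum: int = 0
    da_sum: int = 0

    def try_include(self, tx: Tx) -> bool:
        # A block is full once either running total would exceed the gas
        # limit, so DA-heavy transactions fill blocks faster without changing
        # per-transaction gas accounting.
        if max(self.gas_sum + tx.gas_used,
               self.da_sum + da_footprint(tx)) > self.gas_limit:
            return False
        self.gas_sum += tx.gas_used
        self.da_sum += da_footprint(tx)
        return True

    @property
    def header_gas_used(self) -> int:
        # Repurposed header field: the max over both resources' totals.
        return max(self.gas_sum, self.da_sum)
```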