DAS phase 1 #2146
Conversation
…fining the new SSZ class with the new field classes
Force-pushed from ec62e15 to ce6feee
Co-authored-by: dankrad <dankrad@ethereum.org>
specs/phase1/beacon-chain.md
Outdated
| Name | Value | Unit | Description |
| - | - | - | - |
| `MAX_GASPRICE` | `Gwei(2**24)` (= 16,777,216) | Gwei | Max gasprice charged for a TARGET-sized shard block |
| `MIN_GASPRICE` | `Gwei(2**3)` (= 8) | Gwei | Min gasprice charged for a TARGET-sized shard block |
Denominating in Gwei seems coarse. Given that it's a 64-bit integer, it would be easy to increase resolution to Wei. Have you checked the dynamics of this when the gas price is at the lower end of the scale? The gas price might become unresponsive or take longer to adjust.
It's not that coarse! It's not gwei per gas, it's gwei for the entire contents of the block. So the lowest possible gasprice literally means paying (at current prices) $0.00000476 to fill an entire block.
I'm just saying that at the lower limit it may behave erratically. Maybe we want a higher lower limit then?
I just made a patch so that the gasprice delta is minimum 1, so it won't get stuck even at lower levels. Would that work? This is how the EIP 1559 proposal works on the eth1 side, and it seems a good idea to use similar formulas for both.
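A minimal sketch of the kind of EIP-1559-style update being described, with the per-step delta floored at 1 Gwei so the price cannot get stuck at the low end. The constant names `TARGET_SAMPLES_PER_BLOCK` and `GASPRICE_ADJUSTMENT_COEFFICIENT` are illustrative stand-ins, not the exact spec function:

```python
TARGET_SAMPLES_PER_BLOCK = 1024        # illustrative target block size
GASPRICE_ADJUSTMENT_COEFFICIENT = 8    # illustrative adjustment quotient
MAX_GASPRICE = 2**24                   # Gwei
MIN_GASPRICE = 2**3                    # Gwei

def updated_gasprice(prev_gasprice: int, block_length: int) -> int:
    if block_length > TARGET_SAMPLES_PER_BLOCK:
        # Block above target: price goes up, by at least 1 Gwei
        delta = max(1, prev_gasprice * (block_length - TARGET_SAMPLES_PER_BLOCK)
                    // TARGET_SAMPLES_PER_BLOCK // GASPRICE_ADJUSTMENT_COEFFICIENT)
        return min(prev_gasprice + delta, MAX_GASPRICE)
    else:
        # Block below target: price goes down, by at least 1 Gwei, but never below MIN_GASPRICE
        delta = max(1, prev_gasprice * (TARGET_SAMPLES_PER_BLOCK - block_length)
                    // TARGET_SAMPLES_PER_BLOCK // GASPRICE_ADJUSTMENT_COEFFICIENT)
        return max(prev_gasprice, MIN_GASPRICE + delta) - delta
```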
I agree that this should pretty safely fix the issue at the low end of the spectrum. Question about the upper limit though -- that seems fairly low, only 0.017 ETH per block. That could very plausibly be breached. Current Eth1 blocks have already brought in several ETH per block.
(I guess this would mean that we have to check that a proposer has enough balance before creating a block)
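A quick back-of-the-envelope check of the two bounds being discussed; the ETH price used here is just the ~$595 implied by the $0.00000476 figure above, not a spec assumption:

```python
GWEI_PER_ETH = 10**9
ETH_PRICE_USD = 595  # roughly the price implied by the $0.00000476 figure above

min_gasprice_gwei = 2**3   # MIN_GASPRICE: 8 Gwei for a full TARGET-sized block
max_gasprice_gwei = 2**24  # MAX_GASPRICE: 16,777,216 Gwei

print(min_gasprice_gwei / GWEI_PER_ETH * ETH_PRICE_USD)  # ~4.76e-06 USD
print(max_gasprice_gwei / GWEI_PER_ETH)                  # ~0.0168 ETH per block
```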
My current suggestion is to allow up to ~8 ETH gas (meaning ~16 ETH at full size) and block validators with insufficient balance from proposer selection.
specs/phase1/beacon-chain.md
Outdated
```python
        c for c in state.previous_epoch_pending_shard_headers if
        (c.slot, c.shard, c.confirmed) == (slot, shard, True)
    ]
    if confirmed_candidates:
```
Why should the proposer only be charged for confirmed blocks? If they are doing their job reasonably, then they should be able to get their proposal confirmed, so it would be fair to always charge them. After all, an unavailable block might cause as much, if not more, work for attesters.
Good question! Need to think through the consequences of this. The status quo in ethereum is certainly "you pay only if you get included", though I can see the arguments for changing it.
I think the main consequence is that we introduce a griefing vector: a committee could conspire not to confirm a shard block in order to charge the proposer the gas fee without a reward.
A second possible problem is that under conditions where a block is less likely to be confirmed, e.g. when latency is high, it would discourage block production. In fact this threshold could be quite low: if we expect that a block producer makes 10% profit, then any confirmation probability under 90% would discourage block production.
Charging for unconfirmed blocks would make equivocation to fill up proposals more expensive (#2146 (comment))
specs/phase1/beacon-chain.md
Outdated
```diff
@@ -484,6 +485,10 @@ def process_shard_header(state: BeaconState,
     ))
```

The length-and-degree proof works as follows. For a block B with length `l` (so `l` nonzero values in `[0...MAX_SAMPLES_PER_BLOCK-1]`), the length proof is supposed to be `(B / Z) * (r + X**(len(SETUP)-l))`, where `Z` is the minimal polynomial that is zero over `[l...MAX_SAMPLES_PER_BLOCK-1]` (see `SIZE_CHECK_POINTS` above). The goal is to ensure that a proof can only be constructed if (i) `B / Z` is itself non-fractional, meaning that `B` is a multiple of `Z`, and (ii) `deg(B) < MAX_SAMPLES_PER_BLOCK` (the block is not oversized).

This is done by making the proof a random linear combination of `B / Z` and `(B / Z) * X**(len(SETUP)-l)`. The length proof will have the degree of `(B / Z) * X**(len(SETUP)-l)`, i.e. `deg(B) - (MAX_SAMPLES_PER_BLOCK - l) + len(SETUP) - l`, which simplifies to `deg(B) - MAX_SAMPLES_PER_BLOCK + len(SETUP)`. Because it's only possible to generate proofs for polynomials with degree `< len(SETUP)`, it's thus only possible to generate the proof if this expression is less than `len(SETUP)`, meaning that `deg(B)` must be strictly less than `MAX_SAMPLES_PER_BLOCK`.
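To spell out the degree bookkeeping, a small sanity check of the argument above (the constants are illustrative stand-ins, not the spec values):

```python
MAX_SAMPLES_PER_BLOCK = 2048  # illustrative stand-in
SETUP_LEN = 16384             # illustrative stand-in for len(SETUP)

def length_proof_degree(deg_B: int, l: int) -> int:
    deg_Z = MAX_SAMPLES_PER_BLOCK - l       # Z vanishes on [l, MAX_SAMPLES_PER_BLOCK)
    return deg_B - deg_Z + (SETUP_LEN - l)  # = deg_B - MAX_SAMPLES_PER_BLOCK + SETUP_LEN

# A proof can only be generated for degree < SETUP_LEN, which holds exactly
# when deg(B) < MAX_SAMPLES_PER_BLOCK.
for l in (1, 100, MAX_SAMPLES_PER_BLOCK - 1):
    for deg_B in (l, MAX_SAMPLES_PER_BLOCK - 1, MAX_SAMPLES_PER_BLOCK, 2 * MAX_SAMPLES_PER_BLOCK):
        assert (length_proof_degree(deg_B, l) < SETUP_LEN) == (deg_B < MAX_SAMPLES_PER_BLOCK)
```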
The current code does not implement the random linear combination. But I believe it's not necessary anyway: we can simply make the combined max-degree length proof be `(B * X**(len(SETUP) - l)) / Z`, which has a non-fractional solution only if `B` is 0 at the zeroes of `Z`.
So, on the idea of getting rid of the zero check altogether and making it a pure degree check: I've got some Python code here that interpolates a polynomial from only evaluations on a part of a subgroup:
https://github.com/ethereum/research/blob/master/polynomial_reconstruction/interpolate_polynomial_without_zeroes.py
Excluding the part that can be precomputed, it takes about 1.85s, and FFT in Go seems about 5x faster so an optimized version could do it in around 0.4s. So it feels like that's possible and I would think we should just do it?
This will lead to some savings in the proof computation, so we may well end up even or with some small savings. But we get a much nicer code for length verification which I think is worth it.
Also a nice effect is that the samples become more effective (=better coding rate) for smaller blocks.
[I implemented this as pure degree proofs for now, suggest we discuss the merits in this thread]
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: dankrad <dankrad@ethereum.org>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
specs/phase1/beacon-chain.md
Outdated
```python
shards = [
    compute_shard_from_committee_index(state, slot, index)
    for i in range()
    state,
    attestation.data.index,
    attestation.data.slot
)
```
specs/phase1/beacon-chain.md
Outdated
```python
    header.slot,
    header.shard
```
Suggested change:

```python
    slot,
    shard
```
specs/phase1/beacon-chain.md
Outdated
```python
    # The entire committee (and its balance)
    full_committee = get_beacon_committee(state, slot, shard)
    full_committee_balance = get_total_balance(state, full_committee)
    if True not in [c.confirmed for c in candidates]:
```
I think it would make sense to include an assertion that the list is not empty, because this is implied when setting the value of `winning_index`.
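Concretely, something along these lines (a sketch, reusing the `candidates` list from the surrounding code):

```python
# Sketch: make the non-emptiness assumption explicit before computing winning_index.
assert len(candidates) > 0
```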
There is guaranteed to be one candidate per shard/slot combo due to the initialization that happens in `reset_pending_headers`.
specs/phase1/beacon-chain.md
Outdated
```python
    # So how do we incentivise early custody key reveals now?
    all_custody_secrets_revealed_epoch: Epoch  # to be initialized to FAR_FUTURE_EPOCH

class BeaconBlock(phase0.BeaconBlock):
    shard_headers: List[Signed[ShardHeader], MAX_SHARD_HEADERS]
```
A few comments here:
- `shard_headers` is not set anywhere, thus `assert len(body.shard_headers) <= (...)` will always pass.
- Since `BeaconBlockBody` contains a list of attestations, should `shard_headers` even exist? Shouldn't `BeaconBlockBody.attestations` hold headers within attestations themselves?
- Even if we keep `shard_headers`, shouldn't the size of `BeaconBlockBody.attestations` be increased to cater for shards? Currently `MAX_ATTESTATIONS = 128`.
`body.shard_headers` will be set in the validator guide by the validator constructing the block.
Force-pushed from 4205cf8 to b3c3c0a
made a round of cleanups and minor fixes in 849837a
```diff
 ## Updated containers
 
 The following containers have updated definitions in Phase 1.
 
-### Extended `AttestationData`
+### `AttestationData`
 
 ```python
 class AttestationData(Container):
```
I have a feeling we won't be able to get away with the inheritance format due to the removal of fields but... we'll see
specs/phase1/beacon-chain.md
Outdated
```python
    candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
    random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
    effective_balance = state.validators[candidate_index].effective_balance
    if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte and effective_balance > min_effective_balance:
```
This will never exit if all validators' effective balance is less than `min_effective_balance`.
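One possible guard, sketched here rather than proposed as a concrete spec change, would be to require up front that at least one candidate can pass the filter (reusing `state`, `indices`, and `min_effective_balance` from the quoted code):

```python
# Sketch of a termination guard: only enter the sampling loop if some candidate
# validator can actually satisfy the effective-balance condition.
assert any(
    state.validators[index].effective_balance > min_effective_balance
    for index in indices
)
```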
specs/phase1/beacon-chain.md
Outdated
```diff
@@ -743,6 +414,9 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
     for_ops(body.attestations, process_attestation)
     for_ops(body.deposits, process_deposit)
     for_ops(body.voluntary_exits, process_voluntary_exit)
+    # Limit is dynamic based on active shard count
+    assert len(body.shard_headers) <= MAX_SHARD_HEADERS_PER_SHARD * get_active_shard_count(state, get_current_epoch(state))
```
I think we should just fix shard_count in this initial phase and get this check for free from typing
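For illustration, with a fixed shard count the limit could live in the SSZ type itself, so deserialization enforces the bound and the explicit `assert` becomes unnecessary. This is a sketch in the spec's pseudocode style; `INITIAL_ACTIVE_SHARDS` is a hypothetical constant, and the field type follows the draft above:

```python
INITIAL_ACTIVE_SHARDS = 64  # hypothetical fixed value for this initial phase

class BeaconBlockBody(phase0.BeaconBlockBody):
    # The list limit is enforced by SSZ decoding, so no runtime assert is needed.
    shard_headers: List[Signed[ShardHeader], MAX_SHARD_HEADERS_PER_SHARD * INITIAL_ACTIVE_SHARDS]
```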
specs/phase1/beacon-chain.md
Outdated
```diff
@@ -743,6 +414,9 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
     for_ops(body.attestations, process_attestation)
     for_ops(body.deposits, process_deposit)
     for_ops(body.voluntary_exits, process_voluntary_exit)
+    # Limit is dynamic based on active shard count
+    assert len(body.shard_headers) <= MAX_SHARD_HEADERS_PER_SHARD * get_active_shard_count(state, get_current_epoch(state))
+    for_ops(body.shard_headers, process_shard_header)
```
This needs to happen prior to the `process_attestation` calls so that the pending headers for the newly added attestations already exist in state.
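In other words, something like this ordering (a sketch of the suggestion, reusing the `for_ops` calls quoted above):

```python
# Process shard headers first so the pending headers exist in state before
# the attestations referencing them are processed.
for_ops(body.shard_headers, process_shard_header)
for_ops(body.attestations, process_attestation)
```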
Force-pushed from b3c3c0a to 849837a
```python
    )
    previous_epoch_start_slot = compute_start_slot_at_epoch(get_previous_epoch(state))
    for slot in range(previous_epoch_start_slot, previous_epoch_start_slot + SLOTS_PER_EPOCH):
        for shard in range(SHARD_COUNT):
```
Should `SHARD_COUNT` be replaced with the value calculated by `get_active_shard_count()`?
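That is, a sketch of the suggested change, using the two-argument `get_active_shard_count(state, epoch)` form seen elsewhere in this diff:

```python
for slot in range(previous_epoch_start_slot, previous_epoch_start_slot + SLOTS_PER_EPOCH):
    # Use the active shard count for the epoch being processed instead of SHARD_COUNT.
    for shard in range(get_active_shard_count(state, get_previous_epoch(state))):
        ...
```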
```python
    winning_index = [c.root for c in candidates].index(Root())
    candidates[winning_index].confirmed = True
    for slot_index in range(SLOTS_PER_EPOCH):
        for shard in range(SHARD_COUNT):
```
Should `SHARD_COUNT` be replaced with the value calculated by `get_active_shard_count()`?
```python
    previous_epoch_pending_shard_headers: List[PendingShardHeader, MAX_SHARD_HEADERS * SLOTS_PER_EPOCH]
    current_epoch_pending_shard_headers: List[PendingShardHeader, MAX_SHARD_HEADERS * SLOTS_PER_EPOCH]
    grandparent_epoch_confirmed_commitments: Vector[Vector[DataCommitment, SLOTS_PER_EPOCH], MAX_SHARDS]
    shard_gasprice: uint64
```
Wouldn't using `Gwei` instead of `uint64` be clearer?
Included this in the above PR; we refactored the phase 1 spec into modules.
how about the block
Puts data availability sampling based on the ideas here into a pull request.