EIP-7594: PeerDAS open questions #3652
Comments
Big yes, but note a similar gadget is required by inclusion lists (ILs) in their current design
They don't appear necessary as the proposer should distribute columns directly.
Useful for column custodians to fetch all columns for a given subnet and epoch, like we do now for blobs
Imho we should avoid having the whole validator set operating in an optimistic setting, even if we were to ignore implementation complexity and just worry about consensus security. One attack that this enables is:
This can perhaps be fixed by requiring attesters to have their sampling done by 10s into the previous slot, while the proposer has a bit more time. More complexity, more timing assumptions. Also, this is just one attack, and it's not clear what the entire attack surface looks like.
There is a clear solution: the custody requirement needs to be high enough to provide strong guarantees even before we get to sampling (see here as well). High enough here means somewhere between 4 and 8, depending on the adversarial model we want to work with. With that, an attacker that does not control a lot of validators would fail to accrue many votes for a <50%-available block, so it would be easily reorgable through proposer boost (see the sketch below). Some related things to keep in mind:
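To make the 4-to-8 range concrete, here is a back-of-the-envelope sketch (my own illustration, not from the comment above), assuming 32 columns to match the 32 subnets discussed later in this thread: the fraction of nodes that would fail to notice a block that is only 50% available, as a function of their custody requirement.

```python
from math import comb

NUMBER_OF_COLUMNS = 32  # assumption, matching the 32 subnets discussed in this thread

def p_undetected(custody: int, available: int, n: int = NUMBER_OF_COLUMNS) -> float:
    """P(all `custody` columns of a node land among the `available` ones),
    i.e. the node sees nothing wrong with an under-available block."""
    return comb(available, custody) / comb(n, custody)

# Adversary releases exactly half the columns (data is not reconstructable):
for c in (1, 2, 4, 8):
    print(c, p_undetected(c, NUMBER_OF_COLUMNS // 2))
# custody 4 -> ~5% of nodes fooled; custody 8 -> ~0.1%
```

With a custody requirement of 4, only ~5% of honest nodes would vote for such a block, and with 8 almost none, which is what makes the proposer-boost reorg argument work.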
Imo it makes a lot of sense to move from 4844 to PeerDAS gradually. We can do this not only by slowly increasing the blob count, but also by slowly decreasing the minimum proportion of data custodied by each node, i.e., the CUSTODY_REQUIREMENT
I don't see why we would want more than 16, or even 16
Is it worth also increasing the target peer count? With a target peer count of 70, and each peer subscribing to one subnet (out of 32), a healthy target peer count per subnet would be ~2 on average. This could impact the proposer's ability to disseminate data columns to all 32 subnets successfully, and could potentially lead to data loss, assuming the proposer isn't custodying all columns. We could make an exception for the proposer to custody all columns, but it feels cleaner to just make sure we disseminate the samples reliably (see the estimate below). Although if we increase
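A quick sanity check of the figures above (my own arithmetic, assuming each peer picks its single subnet uniformly at random):

```python
TARGET_PEER_COUNT = 70  # figure quoted above
SUBNET_COUNT = 32

mean_peers_per_subnet = TARGET_PEER_COUNT / SUBNET_COUNT          # ~2.19
p_subnet_empty = (1 - 1 / SUBNET_COUNT) ** TARGET_PEER_COUNT      # ~0.11
expected_empty_subnets = SUBNET_COUNT * p_subnet_empty            # ~3.5

print(mean_peers_per_subnet, p_subnet_empty, expected_empty_subnets)
```

Under this idealized model a node would also have zero peers on roughly three of the 32 subnets at any given time, which is exactly the dissemination risk described above.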
We really shouldn't keep the
Something that I think should be added to the open questions is validator custody: should validators have their own custody assignment, at the very least when they're voting, if not even in every slot? This has two benefits:
Just as an example, we could set
cc @adietrichs
I have my LossyDAS for PeerDAS notebook here. Of course it also covers the 0-losses-allowed case.
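For readers without the notebook, a minimal sketch of the kind of calculation LossyDAS performs (illustrative parameters; my own code, not the notebook's):

```python
from math import comb

def p_false_accept(n: int, available: int, samples: int, allowed_losses: int) -> float:
    """P(at most `allowed_losses` of `samples` distinct, uniformly drawn
    columns are unavailable), for a block with `available` of `n` columns.
    Assumes allowed_losses <= samples."""
    missing = n - available
    total = comb(n, samples)
    return sum(
        comb(missing, k) * comb(available, samples - k) / total
        for k in range(allowed_losses + 1)
    )

# 0 allowed losses recovers the standard sampling bound:
print(p_false_accept(128, 63, 16, 0))   # strict sampling
print(p_false_accept(128, 63, 20, 1))   # a few more samples, one loss tolerated
```

Allowing a small number of losses lets you add a few extra samples while keeping the false-acceptance probability bounded, instead of failing the whole sampling round on a single lost sample.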
I see the following in the spec: What are we trying to address with this? If it remains in the spec, I think there should also be a mechanism (or recommendations) to return to the original values.
Regarding
For the sampling, peer count is important, because the mechanism to sample fast from nodes that are not peers is not yet there, so I see this driving
Context
General background for PeerDAS design and goals:
https://ethresear.ch/t/peerdas-a-simpler-das-approach-using-battle-tested-p2p-components/16541
https://ethresear.ch/t/from-4844-to-danksharding-a-path-to-scaling-ethereum-da/18046
Open questions
Parameterization
Determine final parameters for a robust and secure network.
- What should SAMPLES_PER_SLOT be to hit the security level we want?
- MAX_REQUEST_DATA_COLUMN_SIDECARS as a function of MAX_REQUEST_BLOCKS and NUMBER_OF_COLUMNS (one possible definition sketched below): [WIP] EIP-7594: PeerDAS protocol #3574 (comment)
- What should CUSTODY_REQUIREMENT actually be? See thread: [WIP] EIP-7594: PeerDAS protocol #3574 (comment)
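One natural reading of the MAX_REQUEST_DATA_COLUMN_SIDECARS bullet, with purely illustrative values (both constants were still under discussion in #3574):

```python
# Illustrative values only; both constants were still being debated at the time.
MAX_REQUEST_BLOCKS = 128      # assumption: the Deneb-era block request cap
NUMBER_OF_COLUMNS = 128       # assumption: one draft value for the column count

# Cap column-sidecar requests at "every column of every requestable block":
MAX_REQUEST_DATA_COLUMN_SIDECARS = MAX_REQUEST_BLOCKS * NUMBER_OF_COLUMNS  # 16384
```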
Availability look-behind
One particular parameter is how tight the sampling has to be with respect to block/blob processing and fork choice. For example, nodes could sample in the same slot as a block and not consider a block valid until the sampling completes. In the event this requirement is too strict (e.g. because of network performance), we could relax the requirement to only complete sampling within some number of trailing slots from the head. If we go with a trailing approach, are there additional complications in the regime of long-range forks or network partitions? Does working in this "optimistic" setting cause undue complexity in implementations?
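A minimal sketch of the trailing variant, with a hypothetical SAMPLING_DEADLINE_SLOTS window (the name and the rule are illustrative, not spec):

```python
SAMPLING_DEADLINE_SLOTS = 2  # hypothetical look-behind window

def block_considered_available(block_slot: int, current_slot: int,
                               sampling_completed: bool) -> bool:
    """Import optimistically inside the window; after that, require sampling."""
    if current_slot - block_slot < SAMPLING_DEADLINE_SLOTS:
        return True  # still inside the look-behind window: optimistic
    return sampling_completed  # past the deadline: sampling must have finished
```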
Syncing
Some questions around syncing, relating to PeerDAS and the possible deprecation of EIP-4844-style sampling.
Deprecate blob_sidecars_by_root and blob_sidecars_by_range?
Can we deprecate these RPC methods? Note you would still sample anything inside the blob retention window.
DataColumnSidecarsByRoot and DataColumnSidecarsByRange
Currently missing a method for ByRange. Required for syncing in the regime where clients are expected to retain samples.
What is the exact layout of the RPC method? Multiple columns or just one? See thread: #3574 (comment)
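One possible request shape for the missing ByRange method, as a hedged sketch of the multiple-columns-versus-one question (field names are assumptions, loosely mirroring BlobSidecarsByRange, not settled spec):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataColumnSidecarsByRangeRequest:
    """Hypothetical request: a contiguous slot range plus the column indices
    wanted. A single-column variant would carry one index per request instead."""
    start_slot: int
    count: int            # number of slots, mirroring BlobSidecarsByRange
    columns: List[int]    # requested column indices
```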
Peer scoring
How to downscore a peer who should custody some sample but can’t respond with it?
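One hypothetical starting point for that policy (nothing like this exists in the spec; the class and score deltas are made up): only penalize a peer for columns it actually advertises custody of.

```python
# Hypothetical policy sketch; `Peer` and the score values are illustrative.
class Peer:
    def __init__(self, custody_columns: set[int]):
        self.custody_columns = custody_columns  # derived from the peer's ENR/node ID
        self.score = 0.0

def on_column_request_result(peer: Peer, column_index: int, served: bool) -> None:
    if column_index not in peer.custody_columns:
        return  # the peer never claimed this column; failing to serve it is fine
    peer.score += 1.0 if served else -10.0  # punish claimed-but-missing samples harder
```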
Network shards design
See here for more context on the proposal: #3623
Likely a good simplification. Would touch some of the PeerDAS details around mapping a given peer to their sample subnets.
Some additional implications: #3574 (comment)
Subnet design
Map one column per subnet, unless we need to do otherwise, see #3574 (comment)
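With the one-column-per-subnet mapping, the subnet computation degenerates to the identity; a sketch, assuming the 32-subnet figure from the discussion above:

```python
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 32  # assumption: one subnet per column

def compute_subnet_for_data_column_sidecar(column_index: int) -> int:
    # With as many subnets as columns this is the identity; the modulo only
    # matters if the column count ever exceeds the subnet count.
    return column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT
```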
ENR semantics
#3574 (comment)
Spec refactoring
Misc. refactoring to align with the general spec style:
#3574 (comment)
#3574 (comment)
#3574 (comment)
Ensure all comments that reference Deneb or 4844 now reference EIP-7594
#3574 (comment)
#3574 (comment)