Standardize DAS data store in the fork-choice store #3993
Previously we left the DAS data store (the blob store in Deneb and the data column store in PeerDAS) implementation-dependent. I noticed that this leaves several issues in the specification.
Currently, a sidecar is verified using only the GossipSub rules. This strictly prevents us from receiving sidecars through any method other than GossipSub, which is a problem for PeerDAS because the spec explicitly states that a data column sidecar can also be obtained via Req/Resp. So I decided to propose `on_blob_sidecar` and `on_data_column_sidecar` as the entry points for their respective sidecars, where all the verification is done, in addition to the GossipSub rules, before importing them into the store.

You can think of the GossipSub rules as deciding whether to forward a message further, while the fork-choice handlers, like `on_blob_sidecar` and `on_data_column_sidecar`, decide whether to import an object into the store.
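As a rough illustration, here is a minimal pyspec-style sketch of such a handler. The `store.blobs` mapping keyed by block root is a hypothetical field for illustration, not a current spec definition; the checks reuse the existing Deneb helpers `verify_blob_sidecar_inclusion_proof` and `verify_blob_kzg_proof`, and the real handler would mirror the full set of gossip rules:

```python
def on_blob_sidecar(store: Store, blob_sidecar: BlobSidecar) -> None:
    block_header = blob_sidecar.signed_block_header.message
    # Re-run the sidecar checks so the store only ever holds verified
    # sidecars, whether they arrived via GossipSub or Req/Resp.
    assert blob_sidecar.index < MAX_BLOBS_PER_BLOCK
    assert verify_blob_sidecar_inclusion_proof(blob_sidecar)
    assert verify_blob_kzg_proof(
        blob_sidecar.blob,
        blob_sidecar.kzg_commitment,
        blob_sidecar.kzg_proof,
    )
    # Import the verified sidecar into the standardized DAS data store.
    # `store.blobs` is an assumed field, see the caveat above.
    block_root = hash_tree_root(block_header)
    store.blobs.setdefault(block_root, []).append(blob_sidecar)
```

Like `on_block`, the handler asserts rather than returning a validation result: either the sidecar is imported, or the whole call fails.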
You will probably notice that the verification done in the fork-choice handlers is the same as the one in the GossipSub rules. Yes, they are mostly the same, and we have done this since the beginning of the beacon chain (look at `on_block` for an example). Refactoring away this duplication is not trivial because a fork-choice handler can only ACCEPT or FAIL, while a GossipSub validation must return ACCEPT, IGNORE, or REJECT. Moreover, if the DAS data store is implementation-dependent, it's hard to make assumptions about what is inside the store and extend the spec further.
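To make that asymmetry concrete, here is a hypothetical sketch; the `GossipValidationResult` enum and the `validate_blob_sidecar_gossip` wrapper are illustrative names, not spec definitions:

```python
from enum import Enum

class GossipValidationResult(Enum):
    ACCEPT = 0   # forward the message to peers
    IGNORE = 1   # drop silently, e.g. not yet relevant
    REJECT = 2   # drop and penalize the sender

def validate_blob_sidecar_gossip(store: Store, blob_sidecar: BlobSidecar) -> GossipValidationResult:
    # A gossip rule like "the sidecar is from a future slot" maps to IGNORE,
    # while "the inclusion proof is invalid" maps to REJECT. A fork-choice
    # handler cannot express this distinction: an `assert` either passes
    # (import the object) or fails (do not import), so the two validation
    # paths cannot be trivially merged into one function.
    if blob_sidecar.signed_block_header.message.slot > get_current_slot(store):
        return GossipValidationResult.IGNORE
    if not verify_blob_sidecar_inclusion_proof(blob_sidecar):
        return GossipValidationResult.REJECT
    return GossipValidationResult.ACCEPT
```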
Let's look at `is_data_available` from both Deneb and PeerDAS. In the PeerDAS `is_data_available`, we do some verification using `verify_data_column_sidecar` and `verify_data_column_sidecar_kzg_proofs`. I'm not sure why we verify here at all, because the GossipSub rules, which are more comprehensive, have already done the whole verification. It's probably because we assume that `retrieve_column_sidecars` may return some invalid sidecars. If that's the case, why do we apply only two verification rules instead of running the whole set again? And if we assume that `retrieve_column_sidecars` returns only valid sidecars, why bother verifying at all? The same argument applies to `verify_blob_kzg_proof_batch` in the Deneb `is_data_available`. So I think the solution is to standardize the data store.
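For concreteness, here is a minimal pyspec-style sketch of the direction this implies; the `blobs` field on `Store` and the `store` parameter on `is_data_available` are illustrative assumptions, not the actual PR diff. Once sidecars can only enter the store through the fully verifying fork-choice handlers, `is_data_available` reduces to a lookup with no re-verification:

```python
@dataclass
class Store(object):
    # ... existing fork-choice fields ...
    # Standardized DAS data store: verified blob sidecars keyed by block root.
    blobs: Dict[Root, List[BlobSidecar]] = field(default_factory=dict)

def is_data_available(store: Store,
                      beacon_block_root: Root,
                      blob_kzg_commitments: Sequence[KZGCommitment]) -> bool:
    # Every sidecar in the store was verified by on_blob_sidecar before
    # import, so availability is just a completeness check.
    blob_sidecars = store.blobs.get(beacon_block_root, [])
    return len(blob_sidecars) == len(blob_kzg_commitments)
```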
Some notes: with this change, `verify_blob_kzg_proof_batch` can be completely removed from the Deneb crypto library, which is interesting.