PeerDAS implementation #5683

Merged: 105 commits, Aug 27, 2024
Changes from 100 commits

Commits (105)
4e7c0d1
1D PeerDAS prototype: Data format and Distribution (#5050)
jimmygchen Feb 5, 2024
79fa78f
Merge branch 'unstable' into das
jimmygchen Feb 6, 2024
e740e78
Merge branch 'unstable' into das
jimmygchen Feb 8, 2024
28756da
Merge branch 'unstable' into das
jimmygchen Feb 20, 2024
ae470d7
Add `DataColumnSidecarsByRoot` req/resp protocol (#5196)
jimmygchen Mar 4, 2024
ab427a0
feat: add DAS KZG in data col construction (#5210)
jacobkaufmann Mar 12, 2024
14a85fe
Merge branch 'unstable' into das
jimmygchen Mar 12, 2024
1317f70
fix: update data col subnet count from 64 to 32 (#5413)
jacobkaufmann Mar 14, 2024
7108c74
feat: add peerdas custody field to ENR (#5409)
jacobkaufmann Apr 9, 2024
d5e7e73
Merge branch 'unstable' into das
jimmygchen Apr 9, 2024
41d6225
Merge branch 'unstable' into das-unstable-merge-0415
jimmygchen Apr 15, 2024
5750d49
Merge remote-tracking branch 'sigp/unstable' into das
dapplion Apr 18, 2024
254bb6e
Merge remote-tracking branch 'sigp/unstable' into das
dapplion Apr 24, 2024
d71dc58
Fix merge conflicts.
jimmygchen Apr 24, 2024
c5bab04
Send custody data column to `DataAvailabilityChecker` for determining…
jimmygchen Apr 24, 2024
75eab79
DAS sampling on sync (#5616)
dapplion Apr 29, 2024
6dec0d1
Merge branch 'unstable' into das
jimmygchen Apr 29, 2024
cb11a16
Merge branch 'unstable' into das
jimmygchen Apr 30, 2024
c85c205
Re-process early sampling requests (#5569)
dapplion Apr 30, 2024
2c82fad
Merge remote-tracking branch 'sigp/unstable' into das
dapplion May 1, 2024
d5f3562
Fix merge conflict.
jimmygchen May 2, 2024
0644709
Add data columns by root to currently supported protocol list (#5678)
jimmygchen May 2, 2024
a857546
Merge branch 'unstable' into das
jimmygchen May 7, 2024
bc190e7
Fix simulator tests on `das` branch (#5731)
jimmygchen May 8, 2024
bc51e70
DataColumnByRange boilerplate (#5353)
eserilev May 9, 2024
a970f64
PeerDAS custody lookup sync (#5684)
dapplion May 9, 2024
42d97d3
Add data column kzg verification and update `c-kzg`. (#5701)
jimmygchen May 9, 2024
fe9e5dd
Rename `PEER_DAS_EPOCH` to `EIP7594_FORK_EPOCH` for client interop. (…
jimmygchen May 9, 2024
09d217c
Fetch custody columns in range sync (#5747)
dapplion May 10, 2024
9f495e7
Remove `BlobSidecar` construction and publish after PeerDAS activated…
jimmygchen May 10, 2024
c8ea589
#5684 review comments (#5748)
jimmygchen May 10, 2024
500915f
Make sampling tests deterministic (#5775)
dapplion May 13, 2024
4957347
PeerDAS spec tests (#5772)
jimmygchen May 14, 2024
5570633
Merge remote branch 'sigp/unstable' into das
dapplion May 14, 2024
c356c2e
Merge remote tracking branch 'origin/unstable' into das
dapplion May 14, 2024
562e9d0
Implement unconditional reconstruction for supernodes (#5781)
dapplion May 15, 2024
178253a
Add withhold attack mode for interop (#5788)
dapplion May 15, 2024
4332207
Add column gossip verification and handle unknown parent block (#5783)
jimmygchen May 15, 2024
07df74c
Trigger sampling on sync events (#5776)
dapplion May 15, 2024
aa80950
PeerDAS parameter changes for devnet-0 (#5779)
jimmygchen May 15, 2024
6f7e4e9
Update hardcoded subnet count to 64 (#5791)
dapplion May 15, 2024
b64dd9d
Fix incorrect columns per subnet and config cleanup (#5792)
jimmygchen May 15, 2024
80892e6
Fix DAS branch CI (#5793)
jimmygchen May 15, 2024
163e17f
Only attempt reconstruct columns once. (#5794)
jimmygchen May 15, 2024
500b828
Re-enable precompute table for peerdas kzg (#5795)
dapplion May 16, 2024
7c3c173
Merge branch 'unstable' into das
dapplion May 16, 2024
4946e72
Update subscription filter. (#5797)
jimmygchen May 16, 2024
71520c9
Remove penalty for duplicate columns (expected due to reconstruction)…
jimmygchen May 16, 2024
c98bb52
Revert DAS config for interop testing. Optimise get_custody_columns f…
jimmygchen May 16, 2024
a88ca3c
Don't perform reconstruction for proposer node as it already has all …
jimmygchen May 17, 2024
8059c3a
Multithread compute_cells_and_proofs (#5805)
dapplion May 17, 2024
e6f17d3
Merge branch 'unstable' into das
jimmygchen May 17, 2024
bebcabe
Fix CI errors.
jimmygchen May 17, 2024
6965498
Move PeerDAS type-level config to configurable `ChainSpec` (#5828)
jimmygchen May 23, 2024
656cd8d
Misc custody lookup improvements (#5821)
dapplion May 24, 2024
31c98d9
Merge branch 'unstable' into das
jimmygchen May 28, 2024
4e063d5
Rename deploy_block in network config (`das` branch) (#5852)
jimmygchen May 28, 2024
5ba71bc
Merge branch 'unstable' into das
dapplion Jun 3, 2024
b42249f
Fix CI and merge issues.
jimmygchen Jun 4, 2024
1b71ec6
Merge branch 'unstable' into das
jimmygchen Jun 4, 2024
dac580b
Store data columns individually in store and caches (#5890)
dapplion Jun 18, 2024
c843ede
Merge branch 'unstable' into das
jimmygchen Jun 18, 2024
c5a5c0e
Update reconstruction benches to newer criterion version. (#5949)
jimmygchen Jun 18, 2024
14c0d3b
Merge branch 'unstable' into das
jimmygchen Jun 19, 2024
6ff9480
chore: add `recover_cells_and_compute_proofs` method (#5938)
kevaundray Jun 19, 2024
733b1df
Update `csc` format in ENR and spec tests for devnet-1 (#5966)
jimmygchen Jun 25, 2024
5c0ccef
Fix csc encoding and decoding (#5997)
jimmygchen Jun 26, 2024
7206909
Fix data column rpc request not being sent due to incorrect limits se…
jimmygchen Jun 26, 2024
515382e
Fix incorrect inbound request count causing rate limiting. (#6025)
jimmygchen Jul 2, 2024
4a6fcde
Merge branch 'stable' into das
jimmygchen Jul 2, 2024
ba8126e
Merge remote-tracking branch 'unstable' into das
dapplion Jul 5, 2024
6a3f88f
Add kurtosis config for DAS testing (#5968)
jimmygchen Jul 8, 2024
094ee60
chore: add rust PeerdasKZG crypto library for peerdas functionality a…
kevaundray Jul 9, 2024
bf300b3
Update PeerDAS interop testnet config (#6069)
jimmygchen Jul 11, 2024
018f382
Avoid retrying same sampling peer that previously failed. (#6084)
jimmygchen Jul 16, 2024
55a3be7
Various fixes to custody range sync (#6004)
jimmygchen Jul 16, 2024
04d9eef
chore: update peerdas-kzg library (#6118)
kevaundray Jul 17, 2024
e1f8909
Prevent continuous searchers for low-peer networks (#6162)
AgeManning Jul 25, 2024
14c7302
Merge branch 'unstable' into das
dapplion Jul 31, 2024
b148c4b
Fix merge conflicts
dapplion Jul 31, 2024
37dd0ea
Add cli flag to enable sampling and disable by default. (#6209)
jimmygchen Jul 31, 2024
8c78010
chore: Use reference to an array representing a blob instead of an ow…
kevaundray Aug 1, 2024
90700fe
Store computed custody subnets in PeerDB and fix custody lookup test …
jimmygchen Aug 8, 2024
697498a
Merge branch 'unstable' into das
jimmygchen Aug 12, 2024
b638019
Fix CI failures after merge.
jimmygchen Aug 12, 2024
7ee3780
Batch sampling requests by peer (#6256)
ackintosh Aug 19, 2024
06e34c2
Fix range sync never evaluating request as finished, causing it to ge…
jimmygchen Aug 19, 2024
2aafe30
Merge branch 'unstable' into das-0821-merge
jimmygchen Aug 21, 2024
4e19984
Fix custody tests and load PeerDAS KZG instead.
jimmygchen Aug 21, 2024
15d0f15
Fix ef tests and bench compilation.
jimmygchen Aug 21, 2024
481ebc1
Fix failing sampling test.
jimmygchen Aug 21, 2024
e05d7ba
Merge pull request #6287 from jimmygchen/das-0821-merge
jimmygchen Aug 21, 2024
ea331a9
Remove get_block_import_status
michaelsproul Aug 22, 2024
888c607
Merge branch 'unstable' into das
jimmygchen Aug 22, 2024
488a083
Re-enable Windows release tests.
jimmygchen Aug 22, 2024
d588661
Address some review comments.
jimmygchen Aug 22, 2024
93329c0
Address more review comments and cleanups.
jimmygchen Aug 22, 2024
e049294
Comment out peer DAS KZG EF tests for now
michaelsproul Aug 22, 2024
c44bc0a
Address more review comments and fix build.
jimmygchen Aug 22, 2024
3c4b7f9
Merge branch 'das' of github.com:sigp/lighthouse into das
jimmygchen Aug 22, 2024
45b3203
Unignore Electra tests
michaelsproul Aug 23, 2024
63a39b7
Fix metric name
michaelsproul Aug 23, 2024
77caa97
Address some of Pawan's review comments
michaelsproul Aug 23, 2024
7db3e1c
Merge remote-tracking branch 'origin/unstable' into das
michaelsproul Aug 27, 2024
2c16052
Update PeerDAS network parameters for peerdas-devnet-2 (#6290)
eserilev Aug 27, 2024
2 changes: 2 additions & 0 deletions Cargo.lock

Generated file; diff not rendered.

5 changes: 5 additions & 0 deletions beacon_node/beacon_chain/Cargo.toml
@@ -5,6 +5,10 @@ authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com
edition = { workspace = true }
autotests = false # using a single test binary compiles faster

[[bench]]
name = "benches"
harness = false

[features]
default = ["participation_metrics"]
write_ssz_files = [] # Writes debugging .ssz files to /tmp during block processing.
@@ -16,6 +20,7 @@ test_backfill = []
[dev-dependencies]
maplit = { workspace = true }
serde_json = { workspace = true }
criterion = { workspace = true }

[dependencies]
bitvec = { workspace = true }
66 changes: 66 additions & 0 deletions beacon_node/beacon_chain/benches/benches.rs
@@ -0,0 +1,66 @@
use std::sync::Arc;

use beacon_chain::kzg_utils::{blobs_to_data_column_sidecars, reconstruct_data_columns};
use criterion::{black_box, criterion_group, criterion_main, Criterion};

use bls::Signature;
use eth2_network_config::TRUSTED_SETUP_BYTES;
use kzg::{Kzg, KzgCommitment, TrustedSetup};
use types::{
    beacon_block_body::KzgCommitments, BeaconBlock, BeaconBlockDeneb, Blob, BlobsList, ChainSpec,
    EmptyBlock, EthSpec, MainnetEthSpec, SignedBeaconBlock,
};

fn create_test_block_and_blobs<E: EthSpec>(
    num_of_blobs: usize,
    spec: &ChainSpec,
) -> (SignedBeaconBlock<E>, BlobsList<E>) {
    let mut block = BeaconBlock::Deneb(BeaconBlockDeneb::empty(spec));
    let mut body = block.body_mut();
    let blob_kzg_commitments = body.blob_kzg_commitments_mut().unwrap();
    *blob_kzg_commitments =
        KzgCommitments::<E>::new(vec![KzgCommitment::empty_for_testing(); num_of_blobs]).unwrap();

    let signed_block = SignedBeaconBlock::from_block(block, Signature::empty());

    let blobs = (0..num_of_blobs)
        .map(|_| Blob::<E>::default())
        .collect::<Vec<_>>()
        .into();

    (signed_block, blobs)
}

fn all_benches(c: &mut Criterion) {
    type E = MainnetEthSpec;
    let spec = Arc::new(E::default_spec());

    let trusted_setup: TrustedSetup = serde_json::from_reader(TRUSTED_SETUP_BYTES)
        .map_err(|e| format!("Unable to read trusted setup file: {}", e))
        .expect("should have trusted setup");
    let kzg = Arc::new(Kzg::new_from_trusted_setup(trusted_setup).expect("should create kzg"));

    for blob_count in [1, 2, 3, 6] {
        let kzg = kzg.clone();
        let (signed_block, blob_sidecars) = create_test_block_and_blobs::<E>(blob_count, &spec);

        let column_sidecars =
            blobs_to_data_column_sidecars(&blob_sidecars, &signed_block, &kzg.clone(), &spec)
                .unwrap();

        let spec = spec.clone();

        c.bench_function(&format!("reconstruct_{}", blob_count), |b| {
            b.iter(|| {
                black_box(reconstruct_data_columns(
                    &kzg,
                    &column_sidecars.iter().as_slice()[0..column_sidecars.len() / 2],
                    spec.as_ref(),
                ))
            })
        });
    }
}

criterion_group!(benches, all_benches);
criterion_main!(benches);
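
For context, a sketch of the property this benchmark exercises, reusing the imports, `kzg`, and `spec` set up in `all_benches` above. The assertion that reconstruction from any 50% of the columns yields the full set is an assumption about `reconstruct_data_columns`, not something this file tests:

// Sketch only: same helpers and setup as the bench above.
let (signed_block, blobs) = create_test_block_and_blobs::<MainnetEthSpec>(3, &spec);
let column_sidecars =
    blobs_to_data_column_sidecars(&blobs, &signed_block, &kzg, &spec).unwrap();
// Keep only the first half of the columns, as the bench does, then reconstruct.
let half = &column_sidecars.iter().as_slice()[0..column_sidecars.len() / 2];
let reconstructed = reconstruct_data_columns(&kzg, half, spec.as_ref())
    .expect("reconstruction from 50% of columns should succeed");
assert_eq!(reconstructed.len(), column_sidecars.len());

With the `[[bench]]` entry added to Cargo.toml above, the benchmark should be runnable via `cargo bench -p beacon_chain` (assuming the crate name is `beacon_chain`).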
117 changes: 89 additions & 28 deletions beacon_node/beacon_chain/src/beacon_chain.rs
@@ -22,6 +22,7 @@ pub use crate::canonical_head::CanonicalHead;
use crate::chain_config::ChainConfig;
use crate::data_availability_checker::{
Availability, AvailabilityCheckError, AvailableBlock, DataAvailabilityChecker,
DataColumnsToPublish,
};
use crate::data_column_verification::{GossipDataColumnError, GossipVerifiedDataColumn};
use crate::early_attester_cache::EarlyAttesterCache;
@@ -123,6 +124,7 @@ use task_executor::{ShutdownReason, TaskExecutor};
use tokio_stream::Stream;
use tree_hash::TreeHash;
use types::blob_sidecar::FixedBlobSidecarList;
use types::data_column_sidecar::{ColumnIndex, DataColumnIdentifier};
use types::payload::BlockProductionVersion;
use types::*;

@@ -206,11 +208,13 @@ impl TryInto<Hash256> for AvailabilityProcessingStatus {
/// The result of a chain segment processing.
pub enum ChainSegmentResult<E: EthSpec> {
/// Processing this chain segment finished successfully.
Successful { imported_blocks: usize },
Successful {
imported_blocks: Vec<(Hash256, Slot)>,
},
/// There was an error processing this chain segment. Before the error, some blocks could
/// have been imported.
Failed {
imported_blocks: usize,
imported_blocks: Vec<(Hash256, Slot)>,
error: BlockError<E>,
},
}
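
For illustration, a hypothetical consumer of the widened `imported_blocks` value (not part of this diff): `.len()` recovers the previous `usize` count, while the `(Hash256, Slot)` pairs let callers report exactly which blocks were imported. `log` is assumed to be a slog `Logger`, matching the crate's existing logging style.

// Hypothetical caller of `process_chain_segment`; sketch only.
match result {
    ChainSegmentResult::Successful { imported_blocks } => {
        debug!(log, "Chain segment processed"; "imported_blocks" => imported_blocks.len());
        for (block_root, slot) in &imported_blocks {
            trace!(log, "Imported block"; "block_root" => %block_root, "slot" => %slot);
        }
    }
    ChainSegmentResult::Failed { imported_blocks, error } => {
        warn!(log, "Chain segment failed"; "imported_before_error" => imported_blocks.len(), "error" => ?error);
    }
}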
@@ -2696,7 +2700,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
chain_segment: Vec<RpcBlock<T::EthSpec>>,
) -> Result<Vec<HashBlockTuple<T::EthSpec>>, ChainSegmentResult<T::EthSpec>> {
// This function will never import any blocks.
let imported_blocks = 0;
let imported_blocks = vec![];
let mut filtered_chain_segment = Vec::with_capacity(chain_segment.len());

// Produce a list of the parent root and slot of the child of each block.
@@ -2802,7 +2806,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
chain_segment: Vec<RpcBlock<T::EthSpec>>,
notify_execution_layer: NotifyExecutionLayer,
) -> ChainSegmentResult<T::EthSpec> {
let mut imported_blocks = 0;
let mut imported_blocks = vec![];

// Filter uninteresting blocks from the chain segment in a blocking task.
let chain = self.clone();
@@ -2862,6 +2866,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

// Import the blocks into the chain.
for signature_verified_block in signature_verified_blocks {
let block_slot = signature_verified_block.slot();
match self
.process_block(
signature_verified_block.block_root(),
@@ -2874,9 +2879,9 @@
{
Ok(status) => {
match status {
AvailabilityProcessingStatus::Imported(_) => {
AvailabilityProcessingStatus::Imported(block_root) => {
// The block was imported successfully.
imported_blocks += 1;
imported_blocks.push((block_root, block_slot));
}
AvailabilityProcessingStatus::MissingComponents(slot, block_root) => {
warn!(self.log, "Blobs missing in response to range request";
@@ -2909,6 +2914,17 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
ChainSegmentResult::Successful { imported_blocks }
}

/// Updates fork-choice node into a permanent `available` state so it can become a viable head.
/// Only completed sampling results are received. Blocks are unavailable by default and should
/// be pruned on finalization, on a timeout or by a max count.
pub async fn process_sampling_completed(self: &Arc<Self>, block_root: Hash256) {
// TODO(das): update fork-choice
// NOTE: It is possible that sampling completes before block is imported into fork choice,
// in that case we may need to update availability cache.
// TODO(das): These log levels are too high, reduce once DAS matures
info!(self.log, "Sampling completed"; "block_root" => %block_root);
}

/// Returns `Ok(GossipVerifiedBlock)` if the supplied `block` should be forwarded onto the
/// gossip network. The block is not imported into the chain, it is just partially verified.
///
@@ -2983,6 +2999,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
return Err(BlockError::BlockIsAlreadyKnown(blob.block_root()));
}

// No need to process and import blobs beyond the PeerDAS epoch.
if self.spec.is_peer_das_enabled_for_epoch(blob.epoch()) {
return Err(BlockError::BlobNotRequired(blob.slot()));
}

if let Some(event_handler) = self.event_handler.as_ref() {
if event_handler.has_blob_sidecar_subscribers() {
event_handler.register(EventKind::BlobSidecar(SseBlobSidecar::from_blob_sidecar(
@@ -3000,7 +3021,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
pub async fn process_gossip_data_columns(
self: &Arc<Self>,
data_columns: Vec<GossipVerifiedDataColumn<T>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
let Ok((slot, block_root)) = data_columns
.iter()
.map(|c| (c.slot(), c.block_root()))
@@ -3067,7 +3094,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
pub async fn process_rpc_custody_columns(
self: &Arc<Self>,
custody_columns: DataColumnSidecarList<T::EthSpec>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
let Ok((slot, block_root)) = custody_columns
.iter()
.map(|c| (c.slot(), c.block_root()))
@@ -3094,7 +3127,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let r = self
.check_rpc_custody_columns_availability_and_import(slot, block_root, custody_columns)
.await;
self.remove_notified(&block_root, r)
self.remove_notified_custody_columns(&block_root, r)
}

/// Remove any block components from the *processing cache* if we no longer require them. If the
@@ -3114,13 +3147,15 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

/// Remove any block components from the *processing cache* if we no longer require them. If the
/// block was imported full or erred, we no longer require them.
fn remove_notified_custody_columns(
fn remove_notified_custody_columns<P>(
&self,
block_root: &Hash256,
r: Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
let has_missing_components =
matches!(r, Ok(AvailabilityProcessingStatus::MissingComponents(_, _)));
r: Result<(AvailabilityProcessingStatus, P), BlockError<T::EthSpec>>,
) -> Result<(AvailabilityProcessingStatus, P), BlockError<T::EthSpec>> {
let has_missing_components = matches!(
r,
Ok((AvailabilityProcessingStatus::MissingComponents(_, _), _))
);
if !has_missing_components {
self.reqresp_pre_import_cache.write().remove(block_root);
}
@@ -3378,20 +3413,26 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
slot: Slot,
block_root: Hash256,
data_columns: Vec<GossipVerifiedDataColumn<T>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
if let Some(slasher) = self.slasher.as_ref() {
for data_colum in &data_columns {
slasher.accept_block_header(data_colum.signed_block_header());
}
}

let availability = self.data_availability_checker.put_gossip_data_columns(
slot,
block_root,
data_columns,
)?;
let (availability, data_columns_to_publish) = self
.data_availability_checker
.put_gossip_data_columns(slot, block_root, data_columns)?;

self.process_availability(slot, availability).await
self.process_availability(slot, availability)
.await
.map(|result| (result, data_columns_to_publish))
}

/// Checks if the provided blobs can make any cached blocks available, and imports immediately
@@ -3440,7 +3481,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
slot: Slot,
block_root: Hash256,
custody_columns: DataColumnSidecarList<T::EthSpec>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
// Need to scope this to ensure the lock is dropped before calling `process_availability`
// Even an explicit drop is not enough to convince the borrow checker.
{
Expand All @@ -3465,13 +3512,16 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

// This slot value is purely informative for the consumers of
// `AvailabilityProcessingStatus::MissingComponents` to log an error with a slot.
let availability = self.data_availability_checker.put_rpc_custody_columns(
block_root,
slot.epoch(T::EthSpec::slots_per_epoch()),
custody_columns,
)?;
let (availability, data_columns_to_publish) =
self.data_availability_checker.put_rpc_custody_columns(
block_root,
slot.epoch(T::EthSpec::slots_per_epoch()),
custody_columns,
)?;

self.process_availability(slot, availability).await
self.process_availability(slot, availability)
.await
.map(|result| (result, data_columns_to_publish))
}

/// Imports a fully available block. Otherwise, returns `AvailabilityProcessingStatus::MissingComponents`
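
To show how the new return shape is meant to be consumed, a hypothetical caller sketch (not from this PR): `publish_data_columns` is a placeholder for the caller's gossip-publish hook, and `DataColumnsToPublish` is assumed to be an `Option`-like container of column sidecars recovered by reconstruction (its definition lives in the `data_availability_checker` import added above).

// Hypothetical caller of the updated custody-column API; error handling is sketched only.
let (status, data_columns_to_publish) = chain
    .process_rpc_custody_columns(custody_columns)
    .await?;
if let Some(columns) = data_columns_to_publish {
    // Re-publish reconstructed columns so peers custodying them can fetch the data.
    publish_data_columns(columns);
}
if let AvailabilityProcessingStatus::Imported(block_root) = status {
    debug!(chain.logger(), "Block available via custody columns"; "block_root" => %block_root);
}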
@@ -3522,6 +3572,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
);
}

// TODO(das) record custody column available timestamp

// import
let chain = self.clone();
let block_root = self
@@ -6895,6 +6947,15 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
&& self.spec.is_peer_das_enabled_for_epoch(block_epoch)
}

/// Returns true if we should issue a sampling request for this block
/// TODO(das): check if the block is still within the da_window
pub fn should_sample_slot(&self, slot: Slot) -> bool {
self.config.enable_sampling
&& self
.spec
.is_peer_das_enabled_for_epoch(slot.epoch(T::EthSpec::slots_per_epoch()))
}

pub fn logger(&self) -> &Logger {
&self.log
}
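
A hypothetical call site for the new `should_sample_slot` helper (`trigger_sampling_for` is a placeholder for the sync-layer hook that issues sampling requests): sampling only runs when it is enabled in the chain config and the slot falls in a post-PeerDAS epoch.

// Sketch only; the real trigger lives in the sync layer.
if chain.should_sample_slot(block_slot) {
    trigger_sampling_for(block_root);
}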
4 changes: 2 additions & 2 deletions beacon_node/beacon_chain/src/blob_verification.rs
@@ -409,8 +409,8 @@ pub fn validate_blob_sidecar_for_gossip<T: BeaconChainTypes>(
// Verify that the blob_sidecar was received on the correct subnet.
if blob_index != subnet {
return Err(GossipBlobError::InvalidSubnet {
expected: blob_index,
received: subnet,
expected: subnet,
received: blob_index,
});
}

Expand Down