
Same block getting imported multiple times (PeerDAS) #6439

Open

jimmygchen opened this issue Sep 27, 2024 · 1 comment
Labels: bug (Something isn't working), das (Data Availability Sampling)

Comments

@jimmygchen (Member)

Description

On peer-das-devnet-2, I'm seeing the same block getting written to the database multiple times (4 times within 2 ms!). This occurred on a supernode.

I think this is because we keep the PendingComponents in the cache until block import completes, so each gossip component that makes it through to the DA checker after the block is already available triggers another import:

// Remove block components from da_checker AFTER completing block import. Then we can assert
// the following invariant:
// > A valid unfinalized block is either in fork-choice or da_checker.
//
// If we remove the block when it becomes available, there's some time window during
// `import_block` where the block is nowhere. Consumers of the da_checker can handle the
// extended time a block may exist in the da_checker.
//
// If `import_block` errors (it only fails with internal errors), the pending components will
// be pruned on data_availability_checker maintenance as finality advances.
self.data_availability_checker
    .remove_pending_components(block_root);

With PeerDAS supernodes, once we receive 64 columns the remaining columns can be reconstructed, and the block can be made available at that point. However, we can still receive columns from gossip afterwards (the reconstructed ones haven't been seen on gossip yet), and each of these late gossip columns can trigger another block import, because the PendingComponents entry in the cache is already "complete". The toy model below illustrates this.
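Here's a minimal, self-contained model of the failure mode (plain Rust, not Lighthouse code; the PendingComponents / DaChecker names are borrowed, but everything else is simplified and hypothetical). The cache entry stays "complete" after the 64th column, so each subsequent column also sees the block as available and re-triggers the stand-in for `import_block`:

use std::collections::HashMap;

struct PendingComponents {
    columns_received: usize,
    required: usize,
}

impl PendingComponents {
    fn is_available(&self) -> bool {
        self.columns_received >= self.required
    }
}

struct DaChecker {
    // Keyed by block root (a u64 here for simplicity).
    cache: HashMap<u64, PendingComponents>,
}

impl DaChecker {
    // Called once per gossip column. Returns true when the block is available.
    fn put_gossip_column(&mut self, block_root: u64) -> bool {
        let entry = self.cache.entry(block_root).or_insert(PendingComponents {
            columns_received: 0,
            required: 64, // availability threshold: 64 columns (enough to reconstruct)
        });
        entry.columns_received += 1;
        // Bug surface: the entry is NOT removed when the block first becomes
        // available, so every subsequent column also returns true here and
        // the caller runs block import again.
        entry.is_available()
    }
}

fn main() {
    let mut checker = DaChecker { cache: HashMap::new() };
    let block_root = 0xdead_beef_u64;
    let mut imports = 0;
    // 64 columns arrive (or are reconstructed), then 3 more trickle in via gossip.
    for _ in 0..67 {
        if checker.put_gossip_column(block_root) {
            imports += 1; // stand-in for `import_block`
        }
    }
    assert_eq!(imports, 4); // block "imported" four times
}

With 67 columns total the model "imports" the block on the 64th, 65th, 66th and 67th columns, matching the four store writes in the logs below.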

The change to keep PendingComponents in the DA cache was intentional; it was made in the following PR to address an issue with sync lookup:
#5845

I'm not sure if there's a better way, but if we do need to keep the block in the DA checker during import, perhaps we can add a check before processing a gossip block/blob/data_column: if the block has already been made available, just return instead of processing and re-importing. A rough sketch of such a guard follows.
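Extending the toy model above (again, not Lighthouse code): `fork_choice_contains` is a hypothetical stand-in for an "is this block already imported?" lookup, e.g. against fork choice. The exact predicate and where it lives are assumptions.

impl DaChecker {
    // Hypothetical guard: skip any gossip component whose block has already
    // been imported, instead of re-triggering import on a "complete" entry.
    fn put_gossip_column_guarded(
        &mut self,
        block_root: u64,
        fork_choice_contains: impl Fn(u64) -> bool,
    ) -> bool {
        if fork_choice_contains(block_root) {
            // Duplicate component for an already-imported block: ignore it.
            return false;
        }
        self.put_gossip_column(block_root)
    }
}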

@jimmygchen jimmygchen added bug Something isn't working das Data Availability Sampling labels Sep 27, 2024
@jimmygchen jimmygchen changed the title Same block getting imported multiple times Same block getting imported multiple times (PeerDAS) Sep 27, 2024
@jimmygchen (Member, Author)

Logs from Sep 25 on lighthouse-geth-3 on peerdas-devnet-2:

  • 05:29:05.122: received block 0x647586a23cd593120c0277581cbf829294821abdd6f8306564b78351be4961d4
  • 05:29:05.929: received 67 data columns
  • 05:29:06.476: writing data_columns to store (block import); data column reconstruction must have occurred for this to happen (on receipt of the 64th data column)
  • 05:29:06.482: another "writing data_columns to store" - this should be from the 65th gossip data column
  • 05:29:06.489: "Gossipsub data column processed, imported fully available block" - this proves the above write was triggered by a gossip column
  • 05:29:06.489: "Sending pubsub messages" - this indicates reconstruction completed and the node published a batch of reconstructed data columns
  • 05:29:06.489: "Writing data_columns to store" - 3rd write, triggered by the 66th column received
  • 05:29:06.506: "Writing data_columns to store" - 4th write, triggered by the 67th column received
  • 05:29:06.517: the remaining columns received from gossip are ignored due to DuplicateFullyImported
