We've been having a conversation about the reorg event `depth` in sigp/lighthouse#2090.

The problem is that computing the re-org distance involves finding the highest common ancestor of two blocks. This is a notoriously difficult problem in blockchains that resists optimization and typically involves an O(n) walk back through the chain.

In Eth2, computing that re-org distance is quite easy if the re-org is not deeper than `SLOTS_PER_HISTORICAL_ROOT` (8,192) blocks, since clients probably have all of those block roots sitting in memory (they're in the `BeaconState`). However, once we go past 8,192 we're entering the territory of reading `BeaconState`s from the database or doing block-by-block reads from the DB.

My concern is that if the chain is very unhealthy and the head is bouncing around, then the beacon node (BN) is committed to constantly computing the correct re-org depth for consumers of the event stream. In this PR I've explicitly added the ability for nodes to return `null` and save themselves from entering this O(n) search.
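
To make the bounded search concrete, here's a minimal Rust sketch of the idea. The types and names (`CachedRoots`, `reorg_depth`) are hypothetical illustrations, not Lighthouse's actual API: the point is that the node only consults roots it already holds in memory and returns `None` (serialized as `null` in the event) as soon as the answer would require leaving that window.

```rust
/// Hypothetical constants and types for illustration only.
const SLOTS_PER_HISTORICAL_ROOT: u64 = 8_192;

type Hash256 = [u8; 32];

/// A stand-in for the block roots a node already has in memory.
struct CachedRoots {
    /// `block_roots[i]` is the block root at slot `base_slot + i`.
    base_slot: u64,
    block_roots: Vec<Hash256>,
}

impl CachedRoots {
    fn root_at_slot(&self, slot: u64) -> Option<&Hash256> {
        slot.checked_sub(self.base_slot)
            .and_then(|i| self.block_roots.get(i as usize))
    }
}

/// Try to compute the re-org depth using only in-memory roots.
///
/// Returns `None` if the common ancestor lies outside the cached
/// window, so the node can skip the O(n) database walk entirely.
fn reorg_depth(
    old_head_slot: u64,
    old_chain: &CachedRoots,
    new_chain: &CachedRoots,
) -> Option<u64> {
    // Only search within the window of roots held in memory.
    let lowest_slot = old_head_slot.saturating_sub(SLOTS_PER_HISTORICAL_ROOT - 1);

    for slot in (lowest_slot..=old_head_slot).rev() {
        match (old_chain.root_at_slot(slot), new_chain.root_at_slot(slot)) {
            // First slot (walking backwards) where both chains agree:
            // that's the common ancestor.
            (Some(a), Some(b)) if a == b => return Some(old_head_slot - slot),
            // A root is missing from the cache: give up rather than
            // fall back to reading states or blocks from the DB.
            (None, _) | (_, None) => return None,
            _ => continue,
        }
    }

    // No common ancestor within the cached window.
    None
}
```

The design choice this sketches is the one argued for above: the BN pays at most a cheap, bounded in-memory scan per head change, and anything deeper is reported as `null` rather than forcing an unbounded walk while the chain is unhealthy.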