Deprecate store in blockchain pkg #10903
Conversation
- TestFinalizedCheckpt_GenesisRootOk
- TestCurrentJustifiedCheckpt_CanRetrieve
- TestJustifiedCheckpt_GenesisRootOk
- TestHeadRoot_CanRetrieve
- TestHeadRoot_UseDB
- TestService_ProcessAttestationsAndUpdateHead
- TestService_VerifyWeakSubjectivityRoot
- TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray
- TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree
- TestVerifyFinalizedConsistency_Ok
- TestStore_OnBlock_ProtoArray
- TestStore_OnBlock_DoublyLinkedTree
- TestStore_OnBlockBatch_ProtoArray
- TestStore_OnBlockBatch_DoublyLinkedTree
- TestStore_OnBlockBatch_NotifyNewPayload
- TestCachedPreState_CanGetFromStateSummary_ProtoArray
- TestCachedPreState_CanGetFromStateSummary_DoublyLinkedTree
- TestStore_OnBlockBatch_PruneOK_Protoarray
- TestFillForkChoiceMissingBlocks_CanSave_ProtoArray
- TestFillForkChoiceMissingBlocks_CanSave_DoublyLinkedTree
- TestFillForkChoiceMissingBlocks_RootsMatch_Protoarray
- TestFillForkChoiceMissingBlocks_RootsMatch_DoublyLinkedTree
- TestFillForkChoiceMissingBlocks_FilterFinalized_ProtoArray
- TestFillForkChoiceMissingBlocks_FilterFinalized_DoublyLinkedTree
- TestVerifyBlkDescendant
- Test_verifyBlkFinalizedSlot_invalidBlock
- TestChainService_SaveHeadNoDB
-	FinalizedCheckpt() (*ethpb.Checkpoint, error)
-	CurrentJustifiedCheckpt() (*ethpb.Checkpoint, error)
-	PreviousJustifiedCheckpt() (*ethpb.Checkpoint, error)
+	FinalizedCheckpt() *ethpb.Checkpoint
We will move all these to `*forkchoicetypes.Checkpoint` in a future PR.
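For context on that migration, the two checkpoint shapes differ mainly in how the root is stored. A minimal sketch with stand-in types; the field layout is inferred from the `cp.Root[:]` conversions later in this PR, not copied from Prysm:

```go
// Hypothetical stand-ins for the two checkpoint shapes; the real types
// live in forkchoicetypes and in the proto-generated ethpb package.
package main

type forkchoiceCheckpoint struct {
	Epoch uint64   // primitives.Epoch in the real code
	Root  [32]byte // fixed-size root, hence the cp.Root[:] conversions
}

type ethpbCheckpoint struct {
	Epoch uint64
	Root  []byte // proto-generated types carry variable-length bytes
}

func main() {
	fc := forkchoiceCheckpoint{Epoch: 3}
	// Converting between the two requires slicing the fixed-size array.
	pb := ethpbCheckpoint{Epoch: fc.Epoch, Root: fc.Root[:]}
	_ = pb
}
```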
-	func (s *Service) saveInitSyncBlock(r [32]byte, b interfaces.SignedBeaconBlock) {
+	// This saves a beacon block to the initial sync blocks cache. It rate limits how many blocks
+	// the cache keeps in memory (2 epochs worth of blocks) and saves them to DB when it hits this limit.
+	func (s *Service) saveInitSyncBlock(ctx context.Context, r [32]byte, b interfaces.SignedBeaconBlock) error {
The logic of this function is changed: it prunes the blocks as they are being saved.
This seems like a violation of the single responsibility principle; any reason we want to couple them together like this?
Because every single call to this function was followed by a direct call to prune, and since we want to keep the cache at a fixed size, this is the best point to prune immediately. But I don't feel strongly about this; I can revert to the previous behavior.
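For illustration, a standalone sketch of the prune-on-save coupling being discussed; the cache shape, limit, and names here are assumptions, not Prysm's actual types:

```go
package blockchain

import "sync"

const initSyncBlockLimit = 64 // assumed: roughly 2 epochs of blocks kept in memory

// initSyncCache is a stand-in for the service's init-sync block cache.
type initSyncCache struct {
	mu     sync.Mutex
	blocks map[[32]byte][]byte // root -> encoded block (placeholder payload)
	saveDB func(blocks map[[32]byte][]byte) error
}

// save stores a block and, once the cache hits the limit, persists
// everything to the DB and resets the map, so every caller gets
// pruning for free instead of issuing a separate prune call.
func (c *initSyncCache) save(root [32]byte, blk []byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.blocks[root] = blk
	if len(c.blocks) < initSyncBlockLimit {
		return nil
	}
	if err := c.saveDB(c.blocks); err != nil {
		return err
	}
	c.blocks = make(map[[32]byte][]byte)
	return nil
}
```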
@@ -178,52 +182,8 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlock
	}()
}

-	// Update justified check point.
We no longer need to update justification on the store, as this is already taken care of by inserting the block into forkchoice.
-	balances, err := s.justifiedBalances.get(ctx, bytesutil.ToBytes32(justified.Root))
+	justified := s.ForkChoicer().JustifiedCheckpoint()
+	balances, err := s.justifiedBalances.get(ctx, justified.Root)
We are considering moving the justified balance tracking into forkchoice in a future PR.
	tracing.AnnotateError(span, err)
	return err
}
+	if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
This is all that's left of `handleBlockAfterBatchVerify`, so it's better to inline it here.
-	// This caches input checkpoint as justified for the service struct. It rotates current justified to previous justified,
-	// caches justified checkpoint balances for fork choice and save justified checkpoint in DB.
-	// This method does not have defense against fork choice bouncing attack, which is why it's only recommend to be used during initial syncing.
-	func (s *Service) updateJustifiedInitSync(ctx context.Context, cp *ethpb.Checkpoint) error {
This function is deprecated and removed: when we insert nodes, batched or not, we update the justification info anyway. The bouncing attack is irrelevant here since the blocks come in batches.
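A rough standalone sketch of the invariant that replaces `updateJustifiedInitSync`: justification is bumped as a side effect of node insertion, batched or not. All names here are illustrative:

```go
package forkchoice

// Checkpoint is a stand-in for forkchoicetypes.Checkpoint.
type Checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

// ForkChoice is a stand-in carrying only the justification field.
type ForkChoice struct {
	justified Checkpoint
}

// insertNode sketches the invariant: inserting a node also bumps the
// store's justified checkpoint when the block's post-state checkpoint
// is newer, so no separate init-sync update path is needed.
func (f *ForkChoice) insertNode(root [32]byte, justified Checkpoint) {
	_ = root // node bookkeeping omitted in this sketch
	if justified.Epoch > f.justified.Epoch {
		f.justified = justified
	}
}
```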
	bState := st.Copy()

	var blks []interfaces.SignedBeaconBlock
	var blkRoots [][32]byte
	var firstState state.BeaconState
-	for i := 1; i < 128; i++ {
+	for i := 0; i < 320; i++ {
This change is because forkchoice prunes automatically now, so we insert the minimal number of nodes that triggers a finalization above the threshold (which is 256 blocks; we need 64 more to account for the justified and current epochs).
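Spelling out that arithmetic (256 is the prune threshold quoted above; 32 slots per epoch is the mainnet value):

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32         // mainnet config value
	const pruneThreshold = 256       // finalization must land above this many blocks
	const buffer = 2 * slotsPerEpoch // justified + current epochs = 64
	fmt.Println(pruneThreshold + buffer) // 320, the new loop bound in the test
}
```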
	require.NoError(t, err)
}

-	func TestRemoveStateSinceLastFinalized_EmptyStartSlot(t *testing.T) {
These next three tests are irrelevant now since the update logic is tested in forkchoice.
	err = service.fillInForkChoiceMissingBlocks(
		context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
	require.NoError(t, err)

	// 4 nodes from the block tree 1. B3 - B4 - B6 - B8
-	assert.Equal(t, 4, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
+	// plus the origin block root
+	assert.Equal(t, 5, service.cfg.ForkChoiceStore.NodeCount(), "Miss match nodes")
The number is different because the node tree in the test setup is botched, starting with a block at slot 0. I am artificially adding the genesis block to forkchoice above so that the chain being added is a descendant of it, which is now enforced in forkchoice.
@@ -1436,15 +1108,15 @@ func TestVerifyBlkDescendant(t *testing.T) {
		WithForkChoiceStore(fcs),
	}
	b := util.NewBeaconBlock()
-	b.Block.Slot = 1
+	b.Block.Slot = 32
Since justification is now tracked in forkchoice, we can't really have a justified root from epoch 0, as it should default to the genesis state.
	f.store.justifiedCheckpoint = jc
	bj := f.store.bestJustifiedCheckpoint
-	if bj == nil || jc.Epoch > bj.Epoch {
+	if bj == nil || bj.Root == params.BeaconConfig().ZeroHash || jc.Epoch > bj.Epoch {
These setters are only used at launch; we should consider moving these checkpoints to `New()`. It is not done in this PR because of the extra large number of tests that would fail from such a change.
@@ -546,3 +567,40 @@ func (f *ForkChoice) SetGenesisTime(genesisTime uint64) {
func (f *ForkChoice) SetOriginRoot(root [32]byte) {
	f.store.originRoot = root
}

+	// CachedHeadRoot returns the last cached head root
This method is fast access to the last computed head. A possible future implementation would be to actually compute head when inserting a node, when processing attestations, and on `on_tick`. Then a call to `Head()` would just return the cached value.
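A standalone sketch of that fast-access idea; the locking and field names are assumptions:

```go
package forkchoice

import "sync"

// headCache is a stand-in for the store fields backing CachedHeadRoot.
type headCache struct {
	mu       sync.RWMutex
	headRoot [32]byte // refreshed whenever head is recomputed
}

// CachedHeadRoot returns the last computed head without re-running the
// head computation; Head() would update headRoot before returning it.
func (c *headCache) CachedHeadRoot() [32]byte {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.headRoot
}
```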
	defer f.store.nodesLock.RUnlock()
	root := f.FinalizedCheckpoint().Root
	node := f.store.nodeByRoot[root]
+	if node == nil {
This check is to prevent a panic, but at runtime, if the finalized checkpoint is not in forkchoice, we are doomed anyway.
}
-	fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
+	finalized := s.ForkChoicer().FinalizedCheckpoint()
+	fRoot := s.ensureRootNotZeros(finalized.Root)
Do we still need `ensureRootNotZeros` now that the finalized root is derived from forkchoice?
Unfortunately yes: checkpoints have to have a zero root for genesis, so either we check against the genesis root in Blockchain to zero it out for services that need that, or we return zero from forkchoice. I wonder how much would break if we just returned `originRoot` instead of zero from forkchoice.
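For reference, the behavior being questioned amounts to roughly the following. This is a hedged sketch: the real helper is a `Service` method and the exact fallback root may differ:

```go
package blockchain

// zeroHash stands in for params.BeaconConfig().ZeroHash.
var zeroHash [32]byte

// ensureRootNotZeros sketches the substitution under discussion: genesis
// checkpoints carry a zero root, so callers that need a real block root
// get the origin root back instead.
func ensureRootNotZeros(root, originRoot [32]byte) [32]byte {
	if root == zeroHash {
		return originRoot
	}
	return root
}
```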
	f.store.nodesLock.RLock()
	defer f.store.nodesLock.RUnlock()
	root := f.FinalizedCheckpoint().Root
	node := f.store.nodeByRoot[root]
Check `ok` too?
-	node := f.store.nodeByRoot[root]
+	node, ok := f.store.nodeByRoot[root]
I don't want to change the signature of these functions; forkchoice is sane in guaranteeing that the root is there. Do you suggest returning some bogus root if the root is not there, or do you want to return an error?
Don't need to change the signature. Just return `[32]byte{}`. "This should not happen."
Co-authored-by: terencechain <terence@prysmaticlabs.com>
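Putting the suggestion together, the guarded lookup would look roughly like this (stand-in types, not the actual forkchoice store):

```go
package forkchoice

type node struct {
	weight uint64
}

type store struct {
	nodeByRoot map[[32]byte]*node
}

// weightOf shows the guarded lookup: the signature stays the same and a
// zero value comes back in the "this should not happen" case.
func (s *store) weightOf(root [32]byte) uint64 {
	n, ok := s.nodeByRoot[root]
	if !ok || n == nil {
		return 0 // finalized root missing from forkchoice; zero value, no panic
	}
	return n.weight
}
```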
-	return ethpb.CopyCheckpoint(cp), nil
+	func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
+		cp := s.ForkChoicer().FinalizedCheckpoint()
+		return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
Why not just use the helper here? `ethpb.CopyCheckpoint`
Because `ForkChoicer` returns a `forkchoicetypes.Checkpoint`, not an `ethpb.Checkpoint`. We'll eventually move to the former, and then we'll add that helper back.
	// PreviousJustifiedCheckpt returns the previous justified checkpoint from chain store.
	func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
		cp := s.ForkChoicer().PreviousJustifiedCheckpoint()
		return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
same here
-	r := s.headRoot()
-	return r[:], nil
+	if s.head != nil && s.head.root != params.BeaconConfig().ZeroHash {
+		return s.head.root[:], nil
We should return a copy here. By taking the slice, you still reference the root in `head`.
Thanks, this is right I think.
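The fix amounts to copying the array before slicing it. A minimal sketch with stand-in types:

```go
package blockchain

type head struct {
	root [32]byte
}

// Service is a stand-in holding only the cached head.
type Service struct {
	head *head
}

// HeadRoot returns a copy: assigning a [32]byte copies the array, so the
// returned slice aliases the local copy rather than s.head.root.
func (s *Service) HeadRoot() []byte {
	root := s.head.root
	return root[:]
}
```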
@@ -251,19 +211,19 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlock
	}()

	// Save justified check point to db.
-	if postState.CurrentJustifiedCheckpoint().Epoch > currJustifiedEpoch {
+	if justified.Epoch > currJustifiedEpoch {
This seems like a big change: instead of using the post-state's value, we will use the one from our forkchoice service. This doesn't seem to have anything to do with store, so why are we changing it?
Because we want to guarantee that the DB agrees with forkchoice. The previous behavior was wrong: there are checks one needs to make before saving a justified checkpoint; forkchoice does them, but the post-state does not. Ultimately, forkchoice is the one that should decide what the right justification is, and the rest should follow it. Notice that below I'm passing the postState justification; that will also change in the very next PR because it makes the next spectests fail. See ethereum/consensus-specs#2923 for context.
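As a standalone sketch of the ordering being defended (the interfaces are stand-ins; the point is that forkchoice decides and the DB merely records):

```go
package blockchain

import "context"

// Checkpoint is a stand-in for the checkpoint forkchoice hands back.
type Checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

type forkchoicer interface {
	JustifiedCheckpoint() Checkpoint
}

type database interface {
	SaveJustifiedCheckpoint(ctx context.Context, cp Checkpoint) error
}

// persistJustified records the checkpoint forkchoice arrived at rather
// than the raw post-state value, so the DB can never disagree with
// forkchoice about justification.
func persistJustified(ctx context.Context, fc forkchoicer, db database, currJustifiedEpoch uint64) error {
	justified := fc.JustifiedCheckpoint()
	if justified.Epoch > currJustifiedEpoch {
		return db.SaveJustifiedCheckpoint(ctx, justified)
	}
	return nil
}
```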
	tracing.AnnotateError(span, err)
	return err
}
+	if i > 0 && jCheckpoints[i].Epoch > jCheckpoints[i-1].Epoch {
In the event we only have 1 block in the whole batch, would this method not be able to update the justified checkpoint?
If we only have one block in a batch, then that block is only inserted below in `InsertNode`, which does the full justification update. This is only used for intermediate blocks because we don't pass the full beaconState to forkchoice.
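A schematic of that batch behavior with illustrative types; intermediate blocks get the cheap epoch-comparison bump, and the last block goes through the full insert:

```go
package blockchain

type Checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

type forkchoicer interface {
	UpdateJustified(cp Checkpoint)           // cheap epoch-comparison bump
	InsertNode(root [32]byte, jc Checkpoint) // full justification update
}

// onBlockBatch sketches the flow: intermediate blocks get the cheap bump
// because the full beacon state is not passed to forkchoice for them; the
// last block always goes through InsertNode. A single-block batch skips
// the loop body entirely and still gets the full update.
func onBlockBatch(fc forkchoicer, roots [][32]byte, jCheckpoints []Checkpoint) {
	if len(roots) == 0 {
		return
	}
	for i := range roots[:len(roots)-1] {
		if i > 0 && jCheckpoints[i].Epoch > jCheckpoints[i-1].Epoch {
			fc.UpdateJustified(jCheckpoints[i])
		}
	}
	last := len(roots) - 1
	fc.InsertNode(roots[last], jCheckpoints[last])
}
```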
		return err
	}
}
+	if i > 0 && fCheckpoints[i].Epoch > fCheckpoints[i-1].Epoch {
same comment here
In this PR we remove the package `store` and let `forkchoice` track all justification and finalization information. It removes several functions from the `blockchain` package like `updateJustified` and `updateJustifiedInitSync`. It also simplifies block processing considerably, so that functions like `handleBlockAfterBatchVerify` are removed. Justification/finalization updates are done purely on block insertion and on `on_tick` within forkchoice, to be more aligned with how the spec does it.

Future changes that are not covered in this PR concern the `head` structure.

Fixes: #10734