Store Incoming Blocks and Calculated States in DAG #422
Conversation
beacon-chain/blockchain/service.go
Outdated
return fmt.Errorf("could not hash incoming block: %v", err)
log.Debugf("Could not hash incoming block: %v", err)
}
if block.SlotNumber() > c.lastSlot && block.SlotNumber() > 1 {
We need to establish patterns for how to safely access properties concurrently. Take this one for example: `ProcessBlock` is called by the sync service and checks the value of `c.lastSlot`, but `blockProcessing` could easily be modifying it at the same time, leading to a nasty race condition.
Not sure what the actionable item is yet, but ideally these fields are grouped together into distinct goroutines so that we don't have to use locks.
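One way to sketch that pattern: instead of guarding `c.lastSlot` with a mutex, confine it to a single goroutine and feed it over channels. This is a minimal, hypothetical illustration (the `chainService` fields here are stand-ins, not the real service struct):

```go
package main

import "fmt"

// Hypothetical sketch: lastSlot is owned exclusively by the run goroutine,
// so no lock is needed; other goroutines communicate over channels.
type chainService struct {
	incoming chan uint64   // slot numbers of incoming blocks
	done     chan struct{} // closed on shutdown
	out      chan uint64   // highest slot seen, reported on shutdown
}

func (c *chainService) run() {
	var lastSlot uint64 // confined to this goroutine
	for {
		select {
		case slot := <-c.incoming:
			if slot > lastSlot {
				lastSlot = slot
			}
		case <-c.done:
			c.out <- lastSlot
			return
		}
	}
}

func main() {
	c := &chainService{
		incoming: make(chan uint64),
		done:     make(chan struct{}),
		out:      make(chan uint64),
	}
	go c.run()
	for _, s := range []uint64{3, 7, 5} {
		c.incoming <- s // unbuffered: each send completes before the next
	}
	close(c.done)
	fmt.Println(<-c.out) // prints 7
}
```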
I think that all of the actual "block processing" should be inside the `blockProcessing` goroutine. That solves 2 potential issues: 1) it eliminates the possibility of a race condition, and 2) it unblocks the sync goroutine, which is the one calling this function.
The goroutine should be split up so that updating head and writing to disk happens in the first loop, and then state transition happens in the second loop.
This function should also be renamed to `PreProcessBlock` or something along those lines, since there really shouldn't be any heavy lifting in this function.
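A rough sketch of what that split could look like, under the assumption that `PreProcessBlock` only does a cheap sanity check and hands the block off over a channel (all names here are hypothetical, not the real service API):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sketch: PreProcessBlock does no heavy lifting; it validates
// cheaply and hands the block to the processing goroutine over a channel,
// keeping the sync goroutine unblocked.
type block struct{ slot uint64 }

type chainService struct {
	incomingBlockChan chan block
}

func (c *chainService) PreProcessBlock(b block) error {
	if b.slot == 0 { // cheap sanity check only
		return errors.New("genesis block is not processed")
	}
	c.incomingBlockChan <- b // heavy lifting happens elsewhere
	return nil
}

// blockProcessing owns all actual processing state.
func (c *chainService) blockProcessing(processed chan<- uint64) {
	for b := range c.incomingBlockChan {
		// ... persist block, run fork choice, apply state transition ...
		processed <- b.slot
	}
}

func main() {
	c := &chainService{incomingBlockChan: make(chan block, 1)}
	processed := make(chan uint64, 1)
	go c.blockProcessing(processed)
	if err := c.PreProcessBlock(block{slot: 5}); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(<-processed) // prints 5
}
```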
This is excellent feedback @rawfalafel, thank you. Completely agree with moving everything into one routine. When you say:
> writing to disk happens in the first loop, and then state transition happens in the second loop

do you mean having a goroutine with different select cases handling each of these items?
Yeah, that's what I meant.
`updateHead` should come after `c.chain.CanProcessBlock`. Doesn't make sense to call `updateHead` if validation fails.
My understanding of the order of operations:
- Validate new block
- Persist block to disk (not state)
- Run fork choice rule
- Update active/crystallized state if the head changes
Does that sound correct?
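The order of operations above could be sketched roughly as follows. This is a hedged illustration only: `processBlock`, `saveBlock`, and `updateStates` are hypothetical names, and "highest slot wins" stands in for the real fork choice rule:

```go
package main

import "fmt"

// Hypothetical sketch of the agreed order of operations:
// validate, persist the block (not the state), run the fork choice
// rule, and only update states if the head changed.
type block struct{ slot uint64 }

func processBlock(b block, currentHead uint64) (uint64, error) {
	// 1. Validate the new block.
	if b.slot == 0 {
		return currentHead, fmt.Errorf("invalid block at slot %d", b.slot)
	}
	// 2. Persist the block to disk (state is not persisted here).
	saveBlock(b)
	// 3. Run the fork choice rule (naive stand-in: highest slot wins).
	if b.slot > currentHead {
		// 4. Head changed: update active/crystallized state.
		updateStates(b)
		return b.slot, nil
	}
	return currentHead, nil
}

func saveBlock(b block)    { /* write block to DB */ }
func updateStates(b block) { /* recompute active/crystallized state */ }

func main() {
	head, _ := processBlock(block{slot: 6}, 5)
	fmt.Println(head) // head advances to 6
}
```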
Great, addressed this!
Looks good! I have one comment that stood out to me, although this looks sound overall.
beacon-chain/blockchain/service.go
Outdated
// We store processed blocks and states into a slice by SlotNumber.
// For example, at slot 5, we might have received 10 different blocks,
// and a canonical chain must be derived from this DAG.
processedBlocksBySlot map[uint64][]*types.Block
These processedXXXBySlot fields are temporary; they will later be replaced by a tree, right? If yes, can we make a note of that in the comment?
Great, I'll add comments to clarify.
beacon-chain/blockchain/service.go
Outdated
go c.run(c.ctx.Done())
head, err := c.chain.CanonicalHead()
if err != nil {
	log.Errorf("Could not fetch latest canonical head from DB: %v", err)
If you can't fetch the canonical head, is `Error` too mild? Should it be `Fatal`?
Fatal would shut down the whole node, although you could argue that if the chain head can't be fetched, the node is not of much use.
Agreed, there should be a fatal here; fetching a canonical head is critical for the entire system.
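A minimal sketch of the difference, using the standard library logger (the real code uses a logrus-style `log` package, so this is only illustrative; `canonicalHead` is a hypothetical stand-in):

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

// Hypothetical stand-in for fetching the canonical head from the DB.
func canonicalHead(ok bool) (string, error) {
	if !ok {
		return "", errors.New("db read failed")
	}
	return "head-hash", nil
}

func main() {
	logger := log.New(os.Stderr, "", 0)
	head, err := canonicalHead(true)
	if err != nil {
		// Fatalf logs and then calls os.Exit(1), shutting down the node,
		// whereas Errorf would log and let the node limp along without a head.
		logger.Fatalf("Could not fetch latest canonical head from DB: %v", err)
	}
	fmt.Println(head) // prints head-hash
}
```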
beacon-chain/blockchain/service.go
Outdated
@@ -237,55 +206,102 @@ func (c *ChainService) CanonicalCrystallizedStateEvent() <-chan *types.Crystalli

// run processes the changes needed every beacon chain block,
This comment still refers to `run`.
beacon-chain/blockchain/service.go
Outdated
case block := <-c.latestProcessedBlock:
// 3 steps:
// - Compute the active state for the block.
// - Compute the crystallized state for the block if epoch transition.
There's no more epoch transition; can you change it to cycle transition?
beacon-chain/blockchain/service.go
Outdated
//
// Another routine will run that will continually compute
// the canonical block and states from this data structure using the
// fork choice rule
Missing a period at the end of this comment.
Extracting PoS logic into its own package is a good idea 👍 I noticed a few more things in this PR.
beacon-chain/casper/validator.go
Outdated
// SampleAttestersAndProposers returns lists of random sampled attesters and proposer indices.
func SampleAttestersAndProposers(seed common.Hash, crystallized *types.CrystallizedState) ([]int, int, error) {
	attesterCount := math.Min(params.MinCommiteeSize, float64(crystallized.ValidatorsLength()))
This isn't your code, but can you convert `math.Min` to an if condition? It's more verbose, but we shouldn't convert to a float. Pretty sure there's a subtle bug here due to the conversion: `math.Min` operates on `float64`, and converting a `uint64` larger than 2^53 to `float64` silently loses precision.
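A sketch of the suggested replacement, comparing the integers directly. The constant name and value here are assumptions standing in for `params.MinCommiteeSize`:

```go
package main

import "fmt"

// minCommitteeSize stands in for params.MinCommiteeSize (assumed value).
const minCommitteeSize uint64 = 128

// attesterCount avoids the float64 round trip of math.Min: converting a
// uint64 above 2^53 to float64 loses precision, which is the subtle bug
// the review points out. Comparing uint64 values directly is exact.
func attesterCount(validators uint64) uint64 {
	if validators < minCommitteeSize {
		return validators
	}
	return minCommitteeSize
}

func main() {
	fmt.Println(attesterCount(100))  // fewer validators than the minimum
	fmt.Println(attesterCount(4096)) // capped at the committee size
}
```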
beacon-chain/blockchain/service.go
Outdated
activeState, err := c.chain.computeNewActiveState(common.BytesToHash([]byte(string(timestamp))))
// Process block as a validator if beacon node has registered, else process block as an observer.
if c.validator {
	canProcess, err = c.chain.CanProcessBlock(c.web3Service.Client(), block, true)
I don't think this comment was addressed. Specifically, this validation step should come before `updateHead`.
beacon-chain/casper/incentives.go
Outdated
// CalculateRewardsFFG adjusts validators balances by applying rewards or penalties
// based on FFG incentive structure.
func CalculateRewardsFFG(active *types.ActiveState, crystallized *types.CrystallizedState, block *types.Block) error {
Can we drop the FFG suffix? If this package is the FFG implementation of casper then this is redundant.
return nil, err
}
if headExists {
	bytes, err := b.db.Get([]byte(canonicalHeadKey))
This code is not tested. Should we have a test for when the head block exists in the database?
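The missing test case could look something like this: seed the database with a canonical head, then assert that fetching it succeeds. The `beaconDB` type, its methods, and the key are hypothetical stand-ins for the real DB implementation:

```go
package main

import "fmt"

// Hypothetical in-memory stand-in for the beacon chain DB.
const canonicalHeadKey = "canonical-head"

type beaconDB struct{ store map[string][]byte }

func (b *beaconDB) Put(key string, val []byte) { b.store[key] = val }

// CanonicalHead returns the stored head, or an error when none exists —
// the untested branch the review points at is the "head exists" path.
func (b *beaconDB) CanonicalHead() ([]byte, error) {
	if head, ok := b.store[canonicalHeadKey]; ok {
		return head, nil
	}
	return nil, fmt.Errorf("no canonical head stored")
}

func main() {
	db := &beaconDB{store: map[string][]byte{}}
	db.Put(canonicalHeadKey, []byte{0xAB})
	head, err := db.CanonicalHead()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("%x\n", head) // prints ab
}
```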
// ValidatorsByHeightShard splits a shuffled validator list by height and by shard,
// it ensures there's enough validators per height and per shard, if not, it'll skip
// some heights and shards.
func ValidatorsByHeightShard(crystallized *types.CrystallizedState) ([]*BeaconCommittee, error) {
This method has no test coverage. Please add a test.
}

// SampleAttestersAndProposers returns lists of random sampled attesters and proposer indices.
func SampleAttestersAndProposers(seed common.Hash, crystallized *types.CrystallizedState) ([]int, int, error) {
This method is untested.
@@ -112,6 +126,9 @@ func (a *ActiveState) NewPendingAttestation(record *pb.AttestationRecord) {

// LatestPendingAttestation returns the latest pending attestaton fields.
func (a *ActiveState) LatestPendingAttestation() *pb.AttestationRecord {
This method is also untested. Please consider adding a test for this method and the new lines added.
👍 👍 👍
Hi all,
This PR is part of a series in creating the fork choice rule for the Beacon Chain. Currently, we treat every incoming block as canonical (that is, we set the canonical active/crystallized states by deriving them from incoming blocks and save them to persistent storage). A key part of the beacon node is being able to apply a fork choice rule to a DAG of blocks/states at each slot number to derive a canonical chain. This PR also abstracts all core casper functionality into a `/beacon-chain/casper` package as pure functions with related tests. This is part of #255.
Additionally, nodes should not sync beacon states between each other but should instead compute them from received blocks (the only exception is initial sync, where nodes should share the crystallized state with each other until they are synced to the latest head).
Scope
This PR will do the following:
Persisting DAG across sessions for continuing where fork-choice left off will be done in a separate PR as part of #429.
Separate PRs will be needed to actually derive a canonical chain using the real fork choice rule.
How to Run
First, make sure you have a running geth miner. Then, start from a fresh beacon data directory. Run as follows:
You'll see incoming blocks being put into a DAG and a fork choice (albeit quite naive) being applied at each new received slot.