Networking
The general idea is to switch the beacon_node backend to libp2p via a daemon, and to develop a simple protocol on top using SSZ as serialization.
Further, the idea is to introduce a management layer that handles request retries and coordinates peer scoring. Basically, the attestation and block pools signal the hashes they need, and a separate layer decides how to fetch these from the peer layer (how many concurrent requests, when to retry the same block). When blocks arrive, whether from broadcasts or requests, they should flow into the pool the same way.
Switch to libp2p via daemon (postponed)
Specify simple SSZ-based messages for network operations
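The management layer described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation: names like `RequestManager`, `want`, and `delivered` are made up, and the concurrency and retry policy are placeholder values meant to be tunable.

```python
from dataclasses import dataclass

@dataclass
class FetchState:
    first_requested: float
    attempts: int = 0
    last_attempt: float = 0.0

class RequestManager:
    """Pools signal missing hashes; this layer decides what to fetch and when."""

    def __init__(self, max_concurrent=4, retry_after=10.0):
        self.max_concurrent = max_concurrent  # how many concurrent requests
        self.retry_after = retry_after        # when to retry the same block
        self.wanted: dict[str, FetchState] = {}
        self.in_flight: set[str] = set()

    def want(self, block_hash: str, now: float) -> None:
        # Called by the attestation/block pools when they miss a dependency.
        self.wanted.setdefault(block_hash, FetchState(first_requested=now))

    def next_requests(self, now: float) -> list[str]:
        # Pick hashes to request, respecting the concurrency limit; a hash is
        # retried only after retry_after seconds have passed.
        out = []
        for h, st in self.wanted.items():
            if len(self.in_flight) + len(out) >= self.max_concurrent:
                break
            if h in self.in_flight:
                continue
            if st.attempts == 0 or now - st.last_attempt >= self.retry_after:
                st.attempts += 1
                st.last_attempt = now
                out.append(h)
        self.in_flight.update(out)
        return out

    def delivered(self, block_hash: str) -> None:
        # A block arrived, from a request or a broadcast: same path either way.
        self.wanted.pop(block_hash, None)
        self.in_flight.discard(block_hash)
```

The key property is that delivery is source-agnostic: a gossiped block satisfies an outstanding request just as well as a direct response.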
State sync
When joining the testnet, the client will be behind. We will regularly restart the testnet in the beginning, so we primarily need the capability to catch up via "full sync" - downloading all blocks. The other case where blocks are needed is when an attestation or block is received and its dependent blocks are not (lost in transit, missing history, unknown fork, etc.).
Block request (request by hash or equivalent)
State recovery (low prio)
State diff / light client (low prio)
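A minimal sketch of the "full sync" catch-up loop, assuming a yet-to-be-specified block-request message. `fetch_blocks` and `apply_block` are hypothetical stand-ins, and requesting by slot range is one possible design alongside request-by-hash.

```python
def full_sync(local_head_slot, peer_head_slot, fetch_blocks, apply_block,
              batch_size=64):
    """Download all blocks between our head and the peer's head, in batches."""
    slot = local_head_slot + 1
    while slot <= peer_head_slot:
        count = min(batch_size, peer_head_slot - slot + 1)
        # fetch_blocks stands in for the SSZ block-request message (by range).
        for block in fetch_blocks(start_slot=slot, count=count):
            apply_block(block)  # validate, store, and update fork choice
        slot += count
    return slot - 1  # the slot we have synced up to
```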
Broadcast
After validating or proposing blocks, these will be (naively) gossiped to all other participants so they can count votes and decide on forks. The simplest implementation idea seems to be to publish attestations with a single signature, then aggregate lazily as needed (for example when proposing a block).
Attestations
Proposer blocks
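The lazy-aggregation idea above can be sketched as follows. This is illustrative only: `bls_aggregate` stands in for real BLS signature aggregation (not shown), and the tuple layout of an attestation is a made-up placeholder.

```python
from collections import defaultdict

def aggregate_attestations(attestations, bls_aggregate):
    """Group single-signature attestations by identical data and merge them.

    attestations: iterable of (data, validator_index, signature) tuples.
    Returns a list of (data, sorted validator indices, aggregate signature).
    """
    groups = defaultdict(list)
    for data, index, sig in attestations:
        groups[data].append((index, sig))
    out = []
    for data, members in groups.items():
        indices = sorted(i for i, _ in members)
        # bls_aggregate is a stand-in for actual BLS aggregation.
        agg_sig = bls_aggregate([s for _, s in members])
        out.append((data, indices, agg_sig))
    return out
```

Since aggregation happens only at proposal time, the gossip path stays simple: every attestation travels with exactly one signature.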
libp2p considerations
When switching to libp2p, make sure at a minimum that the following issues are covered and handled correctly:
Peer/service discovery, including features - use private libp2p network or piggyback on ipfs?
Version negotiation - spec version, protocol features
Fork management
Forks start with the latest finalized block and build a tree of possible futures from there. The idea is to manage known blocks and attestations as a collection, and take action to fill out that collection as needed.
There are many race conditions that all need to be handled gracefully, and the code should have room to modify the strategy for handling these:
attestation with unknown block
block with unknown parent
etc
One problem to consider is worst-case performance when malicious validators post blocks (for example, lots of unviable forks/blocks causing data-structure and network-traffic growth).
Attestation pool
Block pool
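One of the race conditions above, a block arriving before its parent, can be handled with a pending set, sketched below. The class and field names are hypothetical, not from the codebase; the `missing` set is what would feed the request-management layer, and its unbounded growth is exactly the worst-case concern noted above.

```python
class BlockPool:
    """Accept blocks whose parent is known; park the rest until it arrives."""

    def __init__(self, finalized_hash):
        self.blocks = {finalized_hash}  # hashes of accepted blocks
        self.pending = {}               # parent_hash -> [(hash, block), ...]
        self.missing = set()            # parents we should ask peers for

    def add(self, block_hash, parent_hash, block=None):
        if parent_hash in self.blocks:
            self._accept(block_hash)
        else:
            # Block with unknown parent: park it and flag the parent as missing.
            self.pending.setdefault(parent_hash, []).append((block_hash, block))
            self.missing.add(parent_hash)

    def _accept(self, block_hash):
        self.blocks.add(block_hash)
        self.missing.discard(block_hash)
        # Accepting a block may unlock children that were waiting on it.
        for child_hash, _child in self.pending.pop(block_hash, []):
            self._accept(child_hash)
```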
Validator management
Adding and removing validators is somewhat in flux, so for now the plan is to not use the ETH1 contract for this feature. Initially, we'll just publish JSON files with validator data (priv key etc) and manage overlap socially. The majority will likely be used in pre-configured beacon nodes running on a server, while some will be reserved for developers to play with.
Potential issues include two people running the same validator - this doubles as a feature, since it will help us discover problems when it happens (for example, receiving an attestation signed by our own key that we did not send is a warning sign that the private key is being reused).
Share validator JSON, let people manage socially
Web service to add/remove validators (low prio)
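A published validator file might look something like the following. Every field name here is made up for illustration; no format has been committed to, and the `0x…` values are placeholders.

```json
{
  "validators": [
    { "index": 0, "privkey": "0x…", "pubkey": "0x…", "assigned_to": "server" },
    { "index": 1, "privkey": "0x…", "pubkey": "0x…", "assigned_to": "dev" }
  ]
}
```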
Devops
We'll initially deploy one or more boot nodes on a server, each hosting a number of validators. The general idea is to restart the testnet frequently. People wanting to connect will get genesis and validator information from the server via... whatever (an HTTP listing, say).
Servers & automated deployment
Monitoring (logs etc)
Extra points for having a Grafana or similar deployment, with Graylog or Elasticsearch/Logstash collecting logs from the nodes, so the network can be monitored in real time
Configurability
Don't wanna run a big network just yet 😄
Allow configuration of shard count etc, so as to create a smaller network
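A small-network preset could look like the sketch below. The mainnet values shown are from the v0.5.x spec constants, but which constants end up configurable, and what the minimal values should be, is an open question; the override numbers here are arbitrary.

```python
# Spec constants as of v0.5.x (for contrast with the minimal preset).
MAINNET = {
    "SHARD_COUNT": 1024,
    "TARGET_COMMITTEE_SIZE": 128,
    "SLOTS_PER_EPOCH": 64,
}

def minimal_preset(overrides=None):
    """Return a config for a deliberately small testnet; values are illustrative."""
    cfg = dict(MAINNET)
    cfg.update({
        "SHARD_COUNT": 8,            # far fewer shards than mainnet
        "TARGET_COMMITTEE_SIZE": 4,  # tiny committees for a handful of validators
        "SLOTS_PER_EPOCH": 8,        # faster epochs for quicker testing cycles
    })
    cfg.update(overrides or {})
    return cfg
```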
Spec updates
The general idea is to follow spec releases by updating every time there's a new upstream release.
Verify / review v0.5.1 compatibility to reach a stable point
Version / release nim-beacon-chain according to the spec version it supports