Phase 0 Networking Specifications #763
Conversation
👍 much easier to discuss specifics in a PR than in an issue
I'm about halfway through, will continue the review tomorrow.
> @@ -0,0 +1,32 @@
> ETH 2.0 Networking Spec - Node Identification
It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.
Cool, will remove.
Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?
cc: @fjl
specs/networking/rpc-interface.md (outdated)

> ## RPC-Over-`libp2p`
>
> To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:
Should we add `beacon` somewhere in the protocol path? I think this might be useful to distinguish between shard and beacon RPC commands.
> )
> ```
>
> If an error occurs, a variant of the response structure is returned:
At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:
    Response: (
        id: uint64
        status_code: uint64
        data: bytes
    )

    SuccessData: (
        ...
    )

    ErrorData: (
        ...
    )
Not really elegant, but I don't really see a better solution (for SSZ that is).
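The two-step decode described above can be sketched in a few lines - a hypothetical decoder for the proposed `Response` layout (little-endian `uint64` fields followed by an opaque blob; this is an illustration, not real SSZ):

```python
import struct

def decode_response(payload: bytes):
    # Step one: decode the fixed-size envelope; `data` stays opaque.
    # Hypothetical layout: id (uint64) | status_code (uint64) | data (rest).
    msg_id, status_code = struct.unpack_from("<QQ", payload, 0)
    return msg_id, status_code, payload[16:]

def decode_data(status_code: int, data: bytes):
    # Step two: pick the schema for the embedded blob based on status_code.
    if status_code == 0:
        return ("SuccessData", data)  # decode as SuccessData here
    return ("ErrorData", data)        # decode as ErrorData here

payload = struct.pack("<QQ", 7, 0) + b"\x01\x02"
msg_id, status, data = decode_response(payload)
kind, _ = decode_data(status, data)
```

The point of the envelope is that a decoder never needs to guess the schema of `data` before reading `status_code`.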
Ah, this is a good point. SSZ doesn't support `null` values either - let me think on this one for a little bit and come up with a solution.
Added an `is_error` boolean field. Note that with SSZ at least you can read the `is_error` field prior to the contents of the `result` via offsets. This allows clients to switch the deserialized type based on the `is_error` value.
the alternative would be to use a list - empty if there's no error, and one item if there is.
just to be clear - when encoding or decoding ssz, there generally exists no provision for skipping fields - even if `is_error` is false, `data` must contain bytes. embedding a `StatusData` in the data field seems to go against the spirit of SSZ generally, as SSZ decoders in general expect to know the exact type of each field, thus it would not fit "naturally" in "normal" ssz code.
That said, this issue stems from using SSZ in a wire protocol setting for which it is not.. great.
specs/networking/rpc-interface.md (outdated)

> The "method ID" fields in the below messages refer to the `method` field in the request structure above.
>
> The first 1,000 values in `error.code` are reserved for system use. The following error codes are predefined:
I like the error codes, this seems very useful (e.g. for `block not found` or something). Not sure about the examples below though - shouldn't `0`, `10`, and `20` just result in a disconnect?
sort of like port numbers :)
> Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:
>
> 1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this client.
You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.
NetworkId vs ChainId +1.
Also, message body compression algorithm indicator.
Also, upgrade paths for SSZ (I get the feeling this might change on the wire)... maybe a sorted list of serialization method preferences, the highest mutual being selected?
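A sorted-preference negotiation like the one suggested could look roughly like this (the method names are made up for illustration):

```python
def select_serialization(ours, theirs):
    """Pick the highest mutually supported serialization method.

    Both lists are ordered most- to least-preferred; here we honor the
    local ordering and simply require that the peer supports the choice.
    """
    theirs_set = set(theirs)
    for method in ours:          # walk our list from most preferred down
        if method in theirs_set:
            return method
    return None                  # no common method: disconnect

# Hypothetical method identifiers, not from the spec:
choice = select_serialization(["ssz2", "ssz"], ["ssz", "rlp"])
```

One open question with this scheme is tie-breaking when the two peers rank a shared pair of methods differently; a common convention is to let the initiator's ordering win, as above.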
Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).
> For clients to be addressable, their ENR responses MUST contain all of the above keys. Client MUST verify the signature of any received ENRs, and disconnect from peers whose ENR signatures are invalid. Each node's public key MUST be unique.
>
> The keys above are enough to construct a [multiaddr](https://github.com/multiformats/multiaddr) for use with the rest of the `libp2p` stack.
One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.
libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.
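For context, a simplified sketch of how such a peer ID is derived - a sha2-256 multihash over the serialized public key. The real libp2p rules (including base58 encoding and the identity-hash case for small keys) live in the linked spec; the key bytes below are placeholders:

```python
import hashlib

def peer_id_multihash(pubkey_bytes: bytes) -> bytes:
    """Sketch of a libp2p-style peer ID: the sha2-256 multihash of the
    serialized public key. A multihash is <code><length><digest>;
    0x12 is the sha2-256 code and 0x20 the 32-byte digest length.
    Real implementations base58-encode the result; omitted here."""
    digest = hashlib.sha256(pubkey_bytes).digest()
    return bytes([0x12, 0x20]) + digest

# Placeholder "key type + bytes" blob, not a real protobuf encoding:
mh = peer_id_multihash(b"\x08\x02\x12\x21" + b"\x02" * 33)
```

This also illustrates the determinism concern raised below: the bytes fed into the hash must be canonical, or two encodings of the same key yield different IDs.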
As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.
Wouldn't this be handled at the `libp2p` layer? Here we're describing how to construct a `multiaddr` from an ENR; the actual handling of the `multiaddr` itself and the underlying hash construction would be the responsibility of `libp2p`.
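For illustration, the ENR-to-multiaddr construction being discussed might look like this sketch, assuming hypothetical ENR key names (`ip`, `tcp`, `peer_id`):

```python
def multiaddr_from_enr(enr: dict) -> str:
    """Assemble a multiaddr string from ENR-style key/value pairs.
    The key names here (`ip`, `tcp`, `peer_id`) are assumptions for
    illustration, not the keys the spec ultimately defines."""
    return f"/ip4/{enr['ip']}/tcp/{enr['tcp']}/p2p/{enr['peer_id']}"

addr = multiaddr_from_enr({"ip": "10.0.0.1", "tcp": 9000,
                           "peer_id": "QmPeer..."})
```

The resulting string can then be handed to the rest of the `libp2p` stack for dialing.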
it would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.
Does using ENR require decoding RLP in this context?
> Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:
>
> 1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this client.
> 2. If the `latest_finalized_root` shared by the peer is not in the client's chain at the expected epoch. For example, if Peer 1 in the diagram below has `(root, epoch)` of `(A, 5)` and Peer 2 has `(B, 3)`, Peer 1 would disconnect because it knows that `B` is not the root in their chain at epoch 3:
Maybe clarify that this is (because it can only be) checked by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it's talking about both at the same time).
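A one-sided version of the check, as suggested, might look like this sketch (the epoch-to-root mapping is an assumed local index, not something the spec defines):

```python
def should_disconnect(my_root_at, peer_finalized_root,
                      peer_finalized_epoch, my_finalized_epoch):
    """One-sided handshake check: only the node with the *higher*
    finalized epoch can verify the peer's (root, epoch) pair against
    its own chain. `my_root_at` maps epoch -> finalized root locally."""
    if peer_finalized_epoch >= my_finalized_epoch:
        return False  # can't check; the peer is at or ahead of us
    return my_root_at.get(peer_finalized_epoch) != peer_finalized_root

# Peer 1 has (A, 5); a peer claiming (B, 3) fails if our root at 3 is C.
disconnect = should_disconnect({3: "C", 5: "A"}, "B", 3, 5)
```

Written this way, the paragraph naturally reads from a single node's perspective, as the comment proposes.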
Cool got it, will do.
specs/networking/rpc-interface.md (outdated)

> - `1`: Client shut down.
> - `2`: Irrelevant network.
> - `3`: Irrelevant shard.
This deals with the beacon chain only, so there are no shards. I think we should have a completely separate protocol and separate connections for shard networks.
specs/networking/rpc-interface.md (outdated)

> )
> ```
>
> Client MAY send `goodbye` messages upon disconnection. The reason field MUST be one of the following values:
Some more from the top of my head that might be helpful:
- too many peers
- not helpful
- malicious/faulty
- wrong chain
- sync finished (?)
shouldn't we still be connected after `sync finished`? We would still need to propagate any newly proposed blocks to our peers.
generally, the standard way to sync in these kinds of "live" protocols is to start listening to broadcasts, then initiate sync.. else you'll miss packets during sync and will have to recover again.
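The subscribe-then-sync pattern can be sketched as a toy model (no real networking; block values are placeholders):

```python
from collections import deque

class Syncer:
    """Subscribe-then-sync: buffer gossip while syncing, apply it after,
    so nothing broadcast during the sync window is lost."""

    def __init__(self):
        self.buffered = deque()  # gossip that arrived mid-sync
        self.syncing = True
        self.chain = []

    def on_gossip_block(self, block):
        if self.syncing:
            self.buffered.append(block)  # hold, don't drop
        else:
            self.chain.append(block)

    def finish_sync(self, synced_blocks):
        self.chain.extend(synced_blocks)
        while self.buffered:             # drain the backlog in order
            self.chain.append(self.buffered.popleft())
        self.syncing = False

s = Syncer()
s.on_gossip_block("early")      # arrives during sync: buffered
s.finish_sync(["genesis"])      # synced history lands first
```

The key ordering property is that subscription starts *before* sync, so the buffered gossip fills any gap between the sync snapshot and the live tip.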
> ```
> (
>     start_root: HashTreeRoot
Do we need the root? It seems redundant to me, except for the case of chain reorgs which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).
we need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).

I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time, and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a `block_root + previous_n_blocks` kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing - they should not be gossiping attestations they have not linked to a finalized epoch of value.
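A server-side sketch of the hypothetical `block_root + previous_n_blocks` request - walk parent links backwards from the requested root (the field and parameter names here are assumptions for illustration):

```python
def blocks_backwards(store, head_root, max_blocks):
    """Serve a backwards block request: starting from `head_root`,
    follow parent pointers until `max_blocks` are collected or the
    chain of known blocks runs out. `store` maps root -> block dict."""
    out, root = [], head_root
    while root in store and len(out) < max_blocks:
        block = store[root]
        out.append(block)
        root = block["parent_root"]  # step one link back in time
    return out

store = {
    "c": {"slot": 3, "parent_root": "b"},
    "b": {"slot": 2, "parent_root": "a"},
    "a": {"slot": 1, "parent_root": None},
}
recent = blocks_backwards(store, "c", 2)
```

Note how the same request serves both full sync and short-gap recovery: the client just names the root it heard about on gossip and asks for its recent ancestry.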
Interesting! To summarize my understanding of your comment: Syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?
That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).
Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).
> more efficient

In terms of network / bandwidth, I'd say it's about the same but there are some nuances:

- in forward sync, I can ask for "more" slots than already exist, potentially saving round trips - a client could use this to request "all" data at the time of request arrival. consider the following race: A sends request, a new block is produced, B receives request (similar: B starts sending response which takes time, new block is produced).
- in backward sync, one could have a `(latest-head, known_slot_number)` request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.
- both above races are generally solved by collecting data from broadcasts while syncing (classic subscribe-then-sync pattern) - they are mainly concerns if you don't want to subscribe or want to delay subscribing.
- in forward sync, I might end up on a different branch / head than I thought I would - the request itself does not point one out.

In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly.

The backwards sync can be seen as more passive/reactive/lazy while forward sync is more active.
> attestations on top of my sync peer's head flying around the network as evidence that their head is valid
right. the assumption rests on several premises (thanks @djrtwo!):
- honest clients will not propagate invalid data (not signed by a validator they know)
- there's a slashing condition on creating unviable attestations - there's currently no penalty to create & sign an unviable block so one can perhaps imagine a malicious group of validators creating lots of these and for example spam everyone during important periods "for free". It sounds a bit far fetched though, tbh, to be creating blocks this way - would love to hear thoughts.
- I've weak-subjectively selected an initial state that contains some validators. I'd primarily look for anything signed by those validators as another heuristic for where to start syncing (even if the validator set might have changed from there).
> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?)
I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:
- it's important that we have a request like Hello to ask for what clients consider to be the head for non-gossiping use cases - but I think that's orthogonal to the sync direction.
- clients should be free to send that request at any time, not just during the initial negotiation phase
- direction can be different for request and response - if different, requires a slightly "smarter" server
- there's a cost for each direction, in terms of implementation. I'd start with one and look for strong motivations before implementing the other, as the returns are not that great. Either direction is sufficient, really.
> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?)

It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation).

> in backward sync, one could have a (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.

Do we really need `start_slot`? If we give clients the option to request a block by either `start_slot` or `start_root`, then that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (`start_slot` and `start_root`) are required to sync, then I would disagree. We should be able to simply perform a lookup by `block_root` and walk the chain backwards until we reach `max_headers`.
> latest received gossiped block

or even better, latest gossiped attestation

> Do we really need start_slot?
I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.
It is quite possible to add forward sync in a later version of the protocol as well should it prove necessary.
> or even better, latest gossiped attestation

I can dig that.

@arnetheduck or anyone else: why do we need `start_slot`?
A few initial thoughts.
> Visually, a message looks like this:
>
> ```
> +--------------------------+
If other comments are accepted, this enveloping can go away.
Co-Authored-By: mslipper <me@matthewslipper.com>
specs/networking/rpc-interface.md (outdated)

> ```
> (
>     headers: []BlockHeader
I note `BlockHeader` is not defined in the beacon chain spec. I opened a PR to define it as a struct.
If it's not needed specifically for the spec, we could also just define it here.
specs/networking/rpc-interface.md (outdated)

> ## RPC-Over-`libp2p`
>
> To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:
what's the semantic meaning of these long version numbers?
i would imagine it's there because over time it will be bugfixed (bugfix version), updated with a commitment to being backward compatible (minor version), and updated with complete disregard for any backward compatibility, for the sake of progress (major version)
Yes, the idea is that they follow semver. In practice I'd estimate that the only time we'd change these version numbers is if there was a backwards-incompatible change to the serialization/compression scheme.
See @raulk's point above re: message envelopes.
A version number can either be interpreted or not. If we rely on semver, we should specify the correct behavior for clients: consider client a that supports 1.0.0 - should it also accept 1.0.1 messages as valid automatically, or discard them? This matters for forwards compatibility.
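One concrete semver acceptance rule, if we relied on semver here, could be as simple as comparing major versions (a sketch of one possible policy, not a spec decision):

```python
def compatible(supported: str, offered: str) -> bool:
    """One possible rule for protocol version tokens: accept any offered
    version with the same major number as the supported one, on the
    semver assumption that minor/patch bumps are backwards compatible."""
    return supported.split(".")[0] == offered.split(".")[0]

# A client supporting 1.0.0 would then accept 1.0.1 but reject 2.0.0:
accepts_patch = compatible("1.0.0", "1.0.1")
accepts_major = compatible("1.0.0", "2.0.0")
```

Whatever rule is chosen, the spec would need to state it explicitly so all clients make the same forwards-compatibility decision.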
Frankly, I'm in favor of simply having integer version numbers and have a blanket statement that sub-protocol version numbers are neither forwards nor backwards compatible.
that would be my preference as well, with how the encoding and protocol look today.
if we had a serialization format that allowed forwards/backwards compatible additions (ie adding fields) we could maybe consider two-level numbering here, where the first number would be the blanket statement, while the second would signal the version with additional fields added, which would still be compatible with previous clients.
Such an encoding is generally a good thing in the wire use case, which would be a reason to look to extensions to SSZ when used outside consensus (a super-set of SSZ for example).
+1 on integers to signal a generation (generation 1, generation 2...). Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities? @arnetheduck
I would model serialisation format and compression as part of the protocol ID. Then allow Multistream to negotiate.
A possible compatible change could be added message types, so I think minor version numbers could be useful in some cases.
> Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities?
Capabilities are known from the discovery protocol/ENRs already (but we need to define what types of capabilities we need). So I don't think we need it in the HELLO message.
@raulk taking a step back, my initial understanding of the libp2p setup was that you would negotiate capabilities mainly with discovery and connect to clients you know you have common ground with - then merely verify the support by signing up to the various streams here, with each stream being a protocol on its own and libp2p dealing with the multiplexing etc. That has changed now, I see, and it looks like there's another layer of protocol negotiation within the stream to discover capabilities - that feels.. redundant, to do the same work twice, and somewhat limiting, because how does a client add a completely new message they want to test or use in some client-specific scenario (for example to establish / evaluate its usefulness)? But it seems I need to reread the newer spec.

I'll be honest though and say that I don't fully understand where the varint would go at this point with the various layers, but integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it. Numbers.. you need a registry and the associated maintenance.
@arnetheduck my intention isn't to propose any changes here; was just curious to hear the rationale of the Eth2.0 community re: finer-grained protocols vs. coarse-grained protocol with capabilities. We also debate this occasionally in the libp2p community ;-) [libp2p supports both patterns].
Re: semver vs. protocol generations. libp2p does not impose a specific version token (if any). When registering a handler, you can attach a predicate to evaluate the match. So handler X could match N versions, and when receiving the callback, you can inspect the protocol pinned on the stream to infer which abilities you activate, etc.
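A toy illustration of such a handler predicate - one handler matching a family of protocol IDs, then inspecting the pinned ID to decide which abilities to activate (this mimics the idea, not the actual libp2p API):

```python
import re

def make_match(pattern: str):
    """Build a predicate that decides whether a handler accepts a given
    protocol ID; a registry could attach this to one handler so it
    serves several versions."""
    compiled = re.compile(pattern)
    return lambda protocol_id: bool(compiled.fullmatch(protocol_id))

# Match every 1.x.y revision of a hypothetical RPC protocol path:
match_rpc = make_match(r"/eth/serenity/rpc/1\.\d+\.\d+")
```

Inside the handler callback, the exact matched ID would then select which message set or features are enabled for that stream.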
We've traditionally used semver in libp2p, IPFS, etc., but a few of us are not convinced of its aptness. Semver is good to convey the severity of changes in APIs, but protocol evolution is a different beast.
You generally strive to keep things backwards compatible, yet enable feature upgrades/opt-ins over time that may not be rolling cumulative, e.g. if 1.14.0 and 1.15.0 introduce feature X and Y respectively, how do I convey that a given peer supports Y but not X?
That's where protocol granularity comes into play: potentially make each feature/message/RPC a different protocol, and track "generations/revisions" of those protocols. libp2p supports that design. A few thoughts:
- We're evolving Multistream to avoid round trips when you're certain the other party supports a protocol (selection vs negotiation), for example, via an ENR.
- If two messages have timing dependencies (can't send message B until after A), and they segregated across protocols, it may make state-tracking a bit more complicated.
> integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it.
I meant replacing semver by protocol generations, e.g. `/eth/serenity/rpc/v10`. Sorry for not being clear!
We have a call tomorrow to go over the wire protocol, after which I'll update this PR with the group's decision. Ping me on Gitter if you'd like an invite to the call.

Can someone familiar with libp2p draw a diagram of all payload bytes included in a typical packet, all the way from.. transport (tcp), including envelopes, wrappers, multiplexers etc?
>     latest_finalized_root: bytes32
>     latest_finalized_epoch: uint64
>     best_root: bytes32
>     best_slot: uint64
slots are based on wall time - what's the best_slot field for?
pretty sure this is supposed to refer to the slot of the head block. Maybe rename `best_root` and `best_slot` to `head_root` and `head_slot` (or to be even more clear, `head_block_root/slot`)?
I think `head_block_root` and `head_slot` would be clearer
In `specs/networking/rpc-interface.md` (outdated):
```
)
```

Send a list of block roots and slots to the requesting peer.
which beacon block roots? known heads? all?
```
)
```

Requests the `block_bodies` associated with the provided `block_roots` from the peer. Responses MUST return `block_roots` in the order provided in the request. If the receiver does not have a particular `block_root`, it must return a zero-value `block_body` (i.e., a `block_body` container with all zero fields).
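The ordering and zero-fill rule quoted above can be sketched as follows; the `ZERO_BODY` stand-in and the store layout are assumptions for illustration, not the spec's actual container encoding.

```python
# Sketch of the block_bodies response rule: bodies come back in the
# exact order requested, and an unknown root yields a zero-value body.
ZERO_BODY = b"\x00" * 8  # stand-in for an all-zero block_body container

def serve_block_bodies(block_roots, store):
    """Return bodies in request order, zero-filling unknown roots."""
    return [store.get(root, ZERO_BODY) for root in block_roots]

store = {b"root-a": b"body-a", b"root-c": b"body-c"}
resp = serve_block_bodies([b"root-c", b"root-b", b"root-a"], store)
assert resp == [b"body-c", ZERO_BODY, b"body-a"]
```

Because order is preserved, the requester can zip the response against its request without needing roots echoed back.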
It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then), and a few minor items like the transfers etc. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers then bodies creates additional round-trips, which are a cost of their own.
For the sake of expediency, we're going to be implementing a few changes in Hobbits to provide an easy and modular mechanism that allows clients to start talking to one another. For the time being, connections will be established via basic TCP with a fixed port (9000 is what we had discussed and agreed upon in the call). We'll be changing the Hobbits spec to assume SSZ as the default serialization method, and we're going to convert it from a text to a binary protocol in version 0.2.
```
latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
best_slot: uint64
```
I think `head_block_root` and `head_slot` would be clearer
Once the handshake completes, the client with the higher `latest_finalized_epoch` or `best_slot` (if the clients have equal `latest_finalized_epoch`s) SHOULD request beacon block roots from its counterparty via `beacon_block_roots` (i.e., RPC method `10`).
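The rule quoted above can be sketched as a simple comparison; the dict field names mirror the hello message, but the function name and representation are assumptions for illustration.

```python
# Sketch of the post-handshake rule: the side with the higher
# latest_finalized_epoch requests block roots; best_slot breaks ties.
def should_request_roots(ours: dict, theirs: dict) -> bool:
    """True if we should issue beacon_block_roots (method 10)."""
    if ours["latest_finalized_epoch"] != theirs["latest_finalized_epoch"]:
        return ours["latest_finalized_epoch"] > theirs["latest_finalized_epoch"]
    return ours["best_slot"] > theirs["best_slot"]

a = {"latest_finalized_epoch": 10, "best_slot": 320}
b = {"latest_finalized_epoch": 10, "best_slot": 310}
assert should_request_roots(a, b)
assert not should_request_roots(b, a)
```

Note the exact-tie case (equal epoch and equal slot) falls through to "don't request" here, which is one answer to the question raised below in the thread; the spec text itself leaves it open.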
How would this be handled if the clients both have equal `latest_finalized_epoch` and `best_slot`?
then you discard the client, because you know it's either at genesis or providing invalid data; finalization happens several epochs behind `best_slot` in the happy case.
@arnetheduck ah no, I was referring to if both clients have the same `best_slot` and `finalized_epoch` together. So FinalizedEpoch A == FinalizedEpoch B and BestSlot A == BestSlot B.
ah, right. I find that the situation is somewhat analogous to receiving a block whose parent is unknown - you have to make a very similar decision there - the information in this hello, just like in the parentless block, is essentially useless from a trust perspective, and you need to turn to other sources.
@djrtwo suggested that attestations might be a good heuristic, as signing one carries risk for the validator that does so. The information here can, from what I can see, be used to quickly disconnect from a client if they're saying the network is different. The rest is advisory, and you're hoping for the best.
In `specs/networking/rpc-interface.md` (outdated):
```
)
```

Client MAY send `goodbye` messages upon disconnection. The reason field MUST be one of the following values:
shouldn't we still be connected after `sync finished`? We would still need to propagate any newly proposed blocks to our peers
When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as... cc @raulk to confirm that this is the best way to do it.
Correct, we don't need this enveloping at all with
I think this might be a good time to split up this spec:
then, from the application-level protocol we have a list of properties our transport layer must have. In summary, we'd have several documents (or sections, appendices, etc.):
Some of the coordination issues we see stem from it being pretty early for protocol discussions, especially cross-client ones. Splitting it up will allow us to make progress on the application-level messages without being too distracted by other ongoings.
@mslipper general current critiques:
@mslipper re: binary protocol 0.2
;) awaits feedback in bunker |
Not sure what you mean by this... the 'responses' as defined here contain the result of the procedure call or an error. Are you referring to defined error codes?
What do you mean by headers in this case? If you want an additional 'command' (in Hobbits parlance, anyway) you'd just need to define a new
Can you give an example of the type of metadata you are referring to?
This would mean the introduction of a handshake process, which defeats the purpose of having a simple intermediary wire format to tide us over until
I'd honestly prefer not to do this, since defining them would create an implicit requirement within this spec to support multiple compression algorithms. I'd much rather we decide on one as a group to reduce implementation overhead. This is why
@arnetheduck I really like this idea... what do you think of doing this separately from this PR? I'm sure that there will be additional feedback generated from those documents.
Yes. I think we need flexibility, and having aspects like serialisation method separated from message format and semantics will be of more immediate utility than, say, wire performance.
I think we're pretty close already in terms of how the spec is sectioned (nice work!) - just need to make the distinction slightly more explicit. It's mostly an administrative matter to facilitate discussion, make a dedicated space for separate non-libp2p discussion, and establish terminology once and for all (what's "wire" and "application" and so on). Can be done at any time, whenever feels convenient. Eventually, I also hope it will lead to a more clearly documented interface between eth2 and the underlying overlay (haha).
Omitting generic RPC response codes will require us to produce an explicitly coded 'response' message to be paired with each said request. If message A gets sent via RPC, then it's nice to not have to create a message AResponse to indicate the response permutations of 'success' and 'error' at the very least.
Not referring to commands -- referring to pieces of metadata which our core protocol could not have possibly foreseen use cases for in the future. For example, proxies and/or routing topologies will need to tag extra metadata to efficiently push decisions and context to the next hop. Rather than having to modify our protocol, they can just adopt new header extensions. Cache invalidation nonces are another use case.
See above -- I know there are more use cases but I cannot express them at the moment. The main idea is that I should be able to grab lightweight metadata when looking up objects to see if I need the full object (maybe it hasn't changed since the last time I asked).
Let's say that our request looks like: Last but not least, I reserved codes 406 and 407 to indicate 'request compression not supported' and 'response compression not supported' respectively -- this leaves a way for us to get out of the one-compression trap in the future without having to change the protocol. After the first request, the server will respond using the intersection of the compressions it supports and the requestor's preferences (most preferred first) for the response payload -- the requestor can then use this knowledge to use its most preferred/supported compression on subsequent requests. For the time being, to avoid this negotiation, the first implementation's request line should just be:
OR
to avoid all negotiation issues for now.
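The negotiation described above can be sketched as follows. The 406/407 semantics come from the comment; the function shape, status tuple, and algorithm names are assumptions for illustration.

```python
# Sketch: the requestor lists compressions in preference order; the
# responder picks the first one it supports, or signals an empty
# intersection with the reserved codes from the comment above.
REQUEST_COMPRESSION_UNSUPPORTED = 406   # reserved per the comment
RESPONSE_COMPRESSION_UNSUPPORTED = 407  # reserved per the comment

def negotiate(client_prefs, server_supported):
    """Return (status, chosen) for the response compression."""
    for algo in client_prefs:  # client's most-preferred first
        if algo in server_supported:
            return 200, algo
    return RESPONSE_COMPRESSION_UNSUPPORTED, None

assert negotiate(["snappy", "none"], {"none"}) == (200, "none")
assert negotiate(["snappy"], {"gzip"}) == (407, None)
```

After the first exchange, the requestor caches the chosen algorithm and uses it on subsequent requests, so the negotiation cost is paid once per peer.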
I support this. Maybe we can even abstract away the serialization format from the application level protocol in order to isolate another heavily discussed component. |
### Alternative for Non-`libp2p` Clients

Some clients are waiting for `libp2p` implementations in their respective languages. As such, they MAY listen for raw TCP messages on port `9000`. To distinguish RPC messages from other messages on that port, a byte prefix of `ETH` (`0x455448`) MUST be prepended to all messages. This option will be removed once `libp2p` is ready in all supported languages.
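The interim framing rule quoted above amounts to a three-byte prefix check; a minimal sketch (the function names are illustrative, and any framing beyond the prefix is assumed out of scope here):

```python
# Sketch of the raw-TCP interim rule: every message carries the byte
# prefix "ETH" (0x455448) so RPC traffic can be told apart on port 9000.
ETH_PREFIX = b"\x45\x54\x48"  # the bytes of "ETH"

def wrap(message: bytes) -> bytes:
    return ETH_PREFIX + message

def unwrap(raw: bytes) -> bytes:
    if not raw.startswith(ETH_PREFIX):
        raise ValueError("not an ETH-prefixed message")
    return raw[len(ETH_PREFIX):]

assert wrap(b"hello") == b"ETHhello"
assert unwrap(b"ETHhello") == b"hello"
```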
I think we should allow using a separate port. It's entirely possible a client will allow both communication modes.
Also, this paragraph should go to the top since it relates to the envelope and port 9000.
```
(
    sha: bytes32
```
I didn't ask when this got drafted, sorry. What is `sha` here?
The commit hash of the node.
What is that? Best root?
## Encoding Nibble Values

- `0x1`: SSZ
Permission to add `0x2` for BSON please?
I believe it would be best if we could agree on a single encoding format to maximize compatibility and minimize implementation overhead.
Would prefer not, for the same reasons articulated on our call - we should agree together on a single encoding scheme.
## Compression Nibble Values

- `0x0`: no compression
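Since the envelope carries a compression nibble and an encoding nibble, both fit in one header byte. A sketch of the packing (which nibble occupies the high half is an assumption here, not something the quoted spec text fixes):

```python
# Sketch: pack the compression and encoding nibbles into a single
# header byte. High/low nibble assignment is assumed for illustration.
def pack_nibbles(compression: int, encoding: int) -> int:
    assert 0 <= compression <= 0xF and 0 <= encoding <= 0xF
    return (compression << 4) | encoding

def unpack_nibbles(byte: int):
    return byte >> 4, byte & 0x0F

header = pack_nibbles(0x0, 0x1)  # no compression, SSZ
assert header == 0x01
assert unpack_nibbles(header) == (0x0, 0x1)
```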
Permission to add `0x1` for snappy compression please?
See above - I don't want to commit teams to an implementation without getting consensus.
```
)
```

Requests a list of block roots and slots from the peer. The `count` parameter MUST be less than or equal to `32768`. The slots MUST be returned in ascending slot order.
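The two MUSTs quoted above are easy to check mechanically; a sketch (the `(root, slot)` pair representation is an assumption, and note this deliberately does not settle the open questions about skipped slots or bounds raised below):

```python
# Sketch of the two stated requirements: count capped at 32768, and
# returned (root, slot) pairs in ascending slot order.
MAX_COUNT = 32768

def validate_request(count: int) -> bool:
    return count <= MAX_COUNT

def is_valid_response(pairs) -> bool:
    slots = [slot for _root, slot in pairs]
    return slots == sorted(slots)

assert validate_request(32768) and not validate_request(32769)
assert is_valid_response([(b"a", 1), (b"b", 3), (b"c", 4)])
assert not is_valid_response([(b"a", 5), (b"b", 2)])
```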
I have some questions regarding the response:

- Can you skip a slot (e.g., `1, 3, 4`)?
- Can you return less/more than the requested roots?
- Can you start at a higher slot?
- Can you return a slot outside of the `start_slot + count` bounds?

Maybe "The slots MUST be returned in ascending slot order." is succinct already? If this is the case we could add something like "The only requirements for `roots` are ...".

P.S. this is my first comment, thanks for making this doc!
Yes but the protocol could allow many😀
```
(
    id: uint64
    response_code: uint16
    result: bytes
```
Can we just use one structure? You can tell if it's a response by comparing request IDs.
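The single-structure suggestion works if each side tracks outstanding request IDs; a sketch (the message shape and function names are assumptions for illustration):

```python
# Sketch: one message structure for both directions; an incoming
# message is a response iff its id matches a request we have pending.
pending = {}

def send_request(req_id: int, payload: bytes):
    pending[req_id] = payload  # remember outstanding requests

def classify(msg: dict) -> str:
    """Classify an incoming message by its id."""
    if msg["id"] in pending:
        del pending[msg["id"]]
        return "response"
    return "request"

send_request(7, b"hello")
assert classify({"id": 7, "result": b"ok"}) == "response"
assert classify({"id": 8, "result": b""}) == "request"
```

The trade-off is that unsolicited messages (like the proactive block pushes suggested elsewhere in this thread) then need their own id space or a separate method id.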
### Beacon Block Bodies

**Method ID:** `12`
Just like in LES, I would use different method ids for requests and responses. So it's possible for me to send you proactively blocks and headers using RPC, and you don't need to know about it in advance.
You don't need to wait for multistream 2.0. When you open a new stream, multistream 1.0 is in charge of associating that stream with a protocol. To be clear:
Generally, I see we've gained a lot of different... Considering sizes of payloads and taking into consideration the departure from the rest of the eth2 spec, how do we feel about sticking with
```
chain_id: uint64
latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
```
Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like best_root that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.
The details of this are TBD - but probably we're looking at something like `attestations: [Attestation]`, where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).
I'm merging this. We can take conversation to issues tagged `network` and to subsequent PRs.
Supplants #692. Notable changes: a `libp2p` protocol name for each. This fits better with `libp2p`'s stream selection logic.