
Phase 0 Networking Specifications #763

Merged
merged 11 commits into from
Mar 28, 2019

Conversation

@mslipper (Contributor) commented Mar 12, 2019

Supplants #692. Notable changes:

  • Specs are split into messaging, identity, and RPC documents. I've left out broadcast for now so that we can discuss that separately.
  • The RPC protocol uses something more akin to JSON-RPC on the wire in order to identify individual methods to call rather than using a separate libp2p protocol name for each. This fits better with libp2p's stream selection logic.
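The request wrapper described above can be sketched as a single envelope carrying an id, a method selector, and an opaque body, dispatched under one libp2p protocol path. The field names and widths below are illustrative assumptions, not the PR's exact schema:

```python
import struct

# Hypothetical request envelope: every RPC call shares one libp2p protocol
# path, and the method field selects the handler (akin to JSON-RPC).
# The uint64/uint16 widths and little-endian layout are assumptions.
def encode_request(request_id: int, method_id: int, body: bytes) -> bytes:
    # id: uint64, method: uint16, body: raw bytes
    return struct.pack("<QH", request_id, method_id) + body

def decode_request(data: bytes):
    request_id, method_id = struct.unpack_from("<QH", data, 0)
    return request_id, method_id, data[10:]

# Round trip: one wire format, with methods dispatched by id rather than
# by a separate libp2p protocol name per method.
rid, mid, body = decode_request(encode_request(1, 10, b"\x01\x02"))
```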

@mslipper mslipper changed the title Add networking specs Phase 0 Networking Specifications Mar 12, 2019
@mslipper mslipper mentioned this pull request Mar 12, 2019
@jannikluhn (Contributor) left a comment

👍 much easier to discuss specifics in a PR than in an issue

I'm about halfway through; will continue the review tomorrow.

specs/networking/messaging.md (outdated)
@@ -0,0 +1,32 @@
ETH 2.0 Networking Spec - Node Identification
Contributor

It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.

Contributor Author

Cool, will remove.

Contributor

Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?

Contributor

cc: @fjl

specs/networking/rpc-interface.md (outdated)

## RPC-Over-`libp2p`

To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:
Contributor

Should we add beacon somewhere in the protocol path? I think this might be useful to distinguish between shard and beacon RPC commands.


If an error occurs, a variant of the response structure is returned:
Contributor

At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:

Response: (
    id: uint64
    status_code: uint64
    data: bytes
)

SuccessData: (
...
)

ErrorData: (
...
)

Not really elegant, but I don't really see a better solution (for SSZ that is).
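The two-step decode suggested here could look like the following sketch: the generic envelope is decoded first, then `data` is interpreted as success or error data depending on `status_code`. The encodings and widths are illustrative, not real SSZ:

```python
import struct

# Generic Response envelope per the suggestion above: (id, status_code,
# data). A status_code of 0 is assumed to mean success; the embedded blob
# would be decoded as SuccessData or ErrorData in a second step.
def encode_response(resp_id: int, status_code: int, data: bytes) -> bytes:
    return struct.pack("<QQ", resp_id, status_code) + data

def decode_response(raw: bytes):
    resp_id, status_code = struct.unpack_from("<QQ", raw, 0)
    data = raw[16:]  # everything after the two uint64 fields
    if status_code == 0:
        return resp_id, ("success", data)  # decode data as SuccessData here
    return resp_id, ("error", data)        # decode data as ErrorData here

resp_id, (kind, payload) = decode_response(encode_response(7, 0, b"ok"))
```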

Contributor Author

Ah, this is a good point. SSZ doesn't support null values either - let me think on this one for a little bit and come up with a solution.

@mslipper (Contributor Author), Mar 18, 2019

Added an is_error boolean field. Note that with SSZ at least you can read the is_error field prior to the contents of the result via offsets. This allows clients to switch the deserialized type based on the is_error value.
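The offset trick described here can be illustrated by hand: SSZ serializes fixed-size fields first, so in a container like (id: uint64, is_error: bool, result: bytes) the flag always sits at byte 8 and can be peeked before decoding the result. This is a hand-rolled layout for illustration, not a full SSZ codec:

```python
import struct

# Layout: id (uint64, bytes 0-7), is_error (bool, byte 8), then a uint32
# offset to the variable-length result. The fixed part is 13 bytes.
def encode(resp_id: int, is_error: bool, result: bytes) -> bytes:
    offset = 13  # 8 (uint64) + 1 (bool) + 4 (uint32 offset)
    return struct.pack("<QBI", resp_id, is_error, offset) + result

def peek_is_error(raw: bytes) -> bool:
    # Read the flag at its fixed position without touching the result body,
    # then choose the deserialized type for the rest of the message.
    return raw[8] != 0

raw = encode(1, True, b"err")
```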

Contributor

the alternative would be to use a list - empty if there's no error, and one item if there is.

Contributor

just to be clear - when encoding or decoding ssz, there generally exists no provision for skipping fields - even if is_error is false, data must contain bytes. embedding a StatusData in the data field seems to go against the spirit of SSZ generally, as SSZ decoders in general expect to know the exact type of each field, thus would not fit "naturally" in "normal" ssz code.

That said, this issue stems from using SSZ in a wire protocol setting for which it is not.. great.

specs/networking/rpc-interface.md

The "method ID" fields in the below messages refer to the `method` field in the request structure above.

The first 1,000 values in `error.code` are reserved for system use. The following error codes are predefined:
Contributor

I like the error codes, this seems very useful (e.g. for block not found or something). Not sure about the examples below though, shouldn't 0, 10, and 20 just result in a disconnect?


sort of like port numbers :)


Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:

1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this peer.
Contributor

You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.

Member

NetworkId vs ChainId +1.
Also, message body compression algorithm indicator.
Also, upgrade paths for SSZ (I get the feeling this might change on the wire)... maybe a sorted list of serialization method preferences, the highest mutual being selected?
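A minimal sketch of the suggested split, with hypothetical `Hello` field names (network_id for the network/gossip domain, chain_id fixed per chain, e.g. 1 for mainnet ETH):

```python
from dataclasses import dataclass

# Hypothetical handshake fields reflecting the suggestion above: a
# network_id distinct from chain_id. Names and semantics are illustrative;
# the thread below debates whether network_id is needed at all.
@dataclass
class Hello:
    network_id: int
    chain_id: int

def should_disconnect(ours: Hello, theirs: Hello) -> bool:
    # Peers on a different chain or network can never usefully sync with us.
    return (ours.chain_id != theirs.chain_id
            or ours.network_id != theirs.network_id)

same = should_disconnect(Hello(1, 1), Hello(1, 1))  # same network and chain
```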

Contributor

Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).


For clients to be addressable, their ENR responses MUST contain all of the above keys. Clients MUST verify the signature of any received ENRs, and disconnect from peers whose ENR signatures are invalid. Each node's public key MUST be unique.

The keys above are enough to construct a [multiaddr](https://github.com/multiformats/multiaddr) for use with the rest of the `libp2p` stack.
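As a rough illustration of that mapping, assuming the record carries IPv4 and TCP-port entries under conventional ENR key names (the exact eth2 key set was still under discussion in this thread):

```python
# Sketch of building a multiaddr string from ENR key/value pairs. The
# "ip" (4-byte IPv4 address) and "tcp" (port) keys follow the general ENR
# convention; treat the key names here as assumptions.
def enr_to_multiaddr(enr: dict) -> str:
    ip = ".".join(str(b) for b in enr["ip"])  # 4 bytes -> dotted quad
    return f"/ip4/{ip}/tcp/{enr['tcp']}"

addr = enr_to_multiaddr({"ip": bytes([192, 168, 0, 1]), "tcp": 9000})
```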
Member

One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.

Contributor

libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.

@arnetheduck (Contributor), Mar 18, 2019

As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.

Contributor Author

Wouldn't this be handled at the libp2p layer? Here we're describing how to construct a multiaddr from an ENR; the actual handling of the multiaddr itself and the underlying hash construction would be the responsibility of libp2p.

Contributor

it would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.

Does using ENR require decoding RLP in this context?

specs/networking/messaging.md (outdated)


Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:

1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this peer.
2. If the `latest_finalized_root` shared by the peer is not in the client's chain at the expected epoch. For example, if Peer 1 in the diagram below has `(root, epoch)` of `(A, 5)` and Peer 2 has `(B, 3)`, Peer 1 would disconnect because it knows that `B` is not the root in their chain at epoch 3:
Contributor

Maybe clarify that this is checked (because it can only be checked) by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it talks about both at the same time).

Contributor Author

Cool got it, will do.
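The check as seen from the node with the higher finalized epoch, per the example in the quoted spec text, might be sketched as follows. The finalized-root lookup is modeled as a plain dict and the names are illustrative:

```python
# Disconnect rule sketch: given the peer's (finalized root, finalized
# epoch), the node with the higher finalized epoch looks up the root it
# has at that epoch and compares.
def peer_on_our_chain(our_chain: dict, peer_root: bytes, peer_epoch: int,
                      our_finalized_epoch: int) -> bool:
    if peer_epoch > our_finalized_epoch:
        return True  # cannot check beyond our own finality; no verdict
    return our_chain.get(peer_epoch) == peer_root

chain = {3: b"A" * 32, 5: b"B" * 32}  # epoch -> finalized root in our chain
ok = peer_on_our_chain(chain, b"C" * 32, 3, 5)  # wrong root at epoch 3
```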

specs/networking/rpc-interface.md (outdated)

- `1`: Client shut down.
- `2`: Irrelevant network.
- `3`: Irrelevant shard.
Contributor

This deals with the beacon chain only, so there are no shards. I think we should have a completely separate protocol and separate connections for shard networks.


Clients MAY send `goodbye` messages upon disconnection. The reason field MUST be one of the following values:
Contributor

Some more from the top of my head that might be helpful:

- too many peers
- not helpful
- malicious/faulty
- wrong chain
- sync finished (?)

Contributor

shouldn't we still be connected after sync finishes? We would still need to propagate any newly proposed blocks to our peers.

Contributor

generally, the standard way to sync in these kinds of "live" protocols is to start listening to broadcasts, then initiate sync.. else you'll miss packets during sync and will have to recover again.

specs/networking/rpc-interface.md (outdated)
specs/networking/rpc-interface.md (outdated)

```
(
start_root: HashTreeRoot
```
Contributor

Do we need the root? It seems redundant to me, except for the case of chain reorgs which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).

Contributor

We need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).

I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a block_root+previous_n_blocks kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing - they should not be gossiping attestations they have not linked to a finalized epoch of value.

Contributor

Interesting! To summarize my understanding of your comment: Syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?

That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).

Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

@arnetheduck (Contributor), Mar 18, 2019

more efficient

In terms of network / bandwidth, I'd say it's about the same but there are some nuances:

  • in forward sync, I can ask for "more" slots than already exist, potentially saving round trips - a client could use this to request "all" data at the time of request arrival. consider the following race: A sends request, a new block is produced, B receives request (similar: B starts sending response which takes time, new block is produced).
  • in backward sync, one could have a (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.
  • both above races are generally solved by collecting data from broadcasts while syncing (classic subscribe-then-sync pattern) - they are mainly concerns if you don't want to subscribe or want to delay subscribing.
  • in forward sync, I might end up on a different branch / head than I thought I would - the request itself does not point one out

In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly.

The backwards sync can be seen as more passive/reactive/lazy while forward sync is more active..

attestations on top of my sync peer's head flying around the network as evidence that their head is valid

right. the assumption rests on several premises (thanks @djrtwo!):

  • honest clients will not propagate invalid data (not signed by a validator they know)
  • there's a slashing condition on creating unviable attestations - there's currently no penalty to create & sign an unviable block so one can perhaps imagine a malicious group of validators creating lots of these and for example spam everyone during important periods "for free". It sounds a bit far fetched though, tbh, to be creating blocks this way - would love to hear thoughts.
  • I've weak-subjectively selected an initial state that contains some validators. I'd primarily look for anything signed by those validators as another heuristic for where to start syncing (even if the validator set might have changed from there).

Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:

  • it's important that we have a request like Hello to ask for what clients consider to be the head for non-gossiping use cases - but I think that's orthogonal to the sync direction.
  • clients should be free to send that request at any time, not just during the initial negotiation phase
  • direction can be different for request and response - if different, requires a slightly "smarter" server
  • there's a cost for each direction, in terms of implementation. I'd start with one and look for strong motivations before implementing the other, as the returns are not that great. Either direction is sufficient, really.
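The backward-sync request discussed above, combined with the suggested forward-ordered response rule, can be sketched as a server-side walk over parent links. The names and the toy parent map below are illustrative:

```python
# Backward sync sketch: the client asks for block_root plus the N blocks
# behind it; the server walks parent pointers back in time, then returns
# the blocks oldest-first so the client can validate eagerly.
def blocks_behind(parents: dict, block_root: str, n: int) -> list:
    out = [block_root]
    while len(out) < n and out[-1] in parents:
        out.append(parents[out[-1]])  # step one block back in time
    out.reverse()  # forward order: each block's parent precedes it
    return out

parents = {"d": "c", "c": "b", "b": "a"}  # chain a <- b <- c <- d
resp = blocks_behind(parents, "d", 3)
```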

@jrhea (Contributor), Mar 27, 2019

Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?).

It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation)

in backward sync, one could have an (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.

Do we really need start_slot? If we give clients the option to request a block by either start_slot or start_root, that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (start_slot and start_root) are required to sync, then I would disagree: we should be able to simply perform a lookup by block_root and walk the chain backwards until we reach max_headers.

Contributor

latest received gossiped block

or even better, latest gossiped attestation

Do we really need start_slot?

I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.

It is quite possible to add forward sync in a later version of the protocol as well should it prove necessary.

Contributor

or even better, latest gossiped attestation

I can dig that

@arnetheduck or anyone else: why do we need start_slot?

specs/networking/rpc-interface.md (outdated)
specs/networking/rpc-interface.md (outdated)
@raulk (Contributor) left a comment

A few initial thoughts.

specs/networking/messaging.md (outdated)
specs/networking/messaging.md (outdated)
Visually, a message looks like this:

```
+--------------------------+
```
Contributor

If other comments are accepted, this enveloping can go away.


specs/networking/node-identification.md (outdated)
specs/networking/rpc-interface.md (outdated)
specs/networking/rpc-interface.md
jannikluhn and others added 5 commits March 13, 2019 21:52
Co-Authored-By: mslipper <me@matthewslipper.com>
Co-Authored-By: mslipper <me@matthewslipper.com>
Co-Authored-By: mslipper <me@matthewslipper.com>
Co-Authored-By: mslipper <me@matthewslipper.com>
Co-Authored-By: mslipper <me@matthewslipper.com>

```
(
headers: []BlockHeader
```
Contributor

I note BlockHeader is not defined in the beacon chain spec. I opened a PR to define it as a struct.

Contributor

If it's not needed specifically for the spec, we could also just define it here.


## RPC-Over-`libp2p`

To facilitate RPC-over-`libp2p`, a single protocol path is used: `/eth/serenity/rpc/1.0.0`. Remote method calls are wrapped in a "request" structure:
Contributor

what's the semantic meaning of these long version numbers?


I would imagine it's there because over time it will be bugfixed (bugfix version), updated with a commitment to being backward compatible (minor version), and updated with complete disregard for any backward compatibility, for the sake of progress (major version)

Contributor Author

Yes, the idea is that they follow semver. In practice I'd estimate that the only time we'd change these version numbers is if there was a backwards-incompatible change to the serialization/compression scheme.

See @raulk's point above re: message envelopes.

Contributor

A version number can either be interpreted or not. If we rely on semver, we should specify the correct behavior for clients: consider client a that supports 1.0.0 - should it also accept 1.0.1 messages as valid automatically, or discard them? This matters for forwards compatibility.

Contributor Author

Frankly, I'm in favor of simply having integer version numbers and have a blanket statement that sub-protocol version numbers are neither forwards nor backwards compatible.
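Under plain integer generations, compatibility checking reduces to exact protocol-path matching: a peer either speaks a given generation or it does not, with no semver range logic. The path scheme below is illustrative:

```python
# Multistream-select style negotiation sketch with integer generations:
# pick the local path only on an exact match, since versions are declared
# neither forwards nor backwards compatible.
def negotiate(local_path: str, remote_paths: set):
    return local_path if local_path in remote_paths else None

chosen = negotiate("/eth/serenity/rpc/2", {"/eth/serenity/rpc/1"})
```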

Contributor

that would be my preference as well, with how the encoding and protocol looks today.

if we had a serialization format that allowed forwards/backwards compatible additions (ie adding fields) we could maybe consider two-level numbering here, where the first number would be the blanket statement, while the second would signal the version with additional fields added, which would still be compatible with previous clients.

Such an encoding is generally a good thing in the wire use case, which would be a reason to look to extensions to SSZ when used outside consensus (a super-set of SSZ for example).

Contributor

+1 on integers to signal a generation (generation 1, generation 2...). Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities? @arnetheduck

I would model serialisation format and compression as part of the protocol ID. Then allow Multistream to negotiate.

Contributor

A possible compatible change could be added message types, so I think minor version numbers could be useful in some cases.

Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities?

Capabilities are known from the discovery protocol/ENRs already (but we need to define what types of capabilities we need). So I don't think we need it in the HELLO message.

Contributor

@raulk taking a step back, my initial understanding of the libp2p setup was that you would negotiate capabilities with discovery mainly and connect to clients you know you have common ground with - then merely verify the support by signing up to the various streams here, with each stream being a protocol on its own and libp2p dealing with the multiplexing etc. - that has changed now I see, and it looks like there's another layer of protocol negotiation within the stream to discover capabilities - that feels.. redundant, to do the same work twice, and somewhat limiting, because how does a client add a completely new message they want to test or use in some client-specific scenario (for example to establish / evaluate its usefulness)? But it seems I need to reread the newer spec.

I'll be honest though and say that I don't fully understand where the varint would go at this point with the various layers, but integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it. Numbers.. you need a registry and the associated maintenance.

Contributor

@arnetheduck my intention isn't to propose any changes here; was just curious to hear the rationale of the Eth2.0 community re: finer-grained protocols vs. coarse-grained protocol with capabilities. We also debate this occasionally in the libp2p community ;-) [libp2p supports both patterns].

Re: semver vs. protocol generations. libp2p does not impose a specific version token (if any). When registering a handler, you can attach a predicate to evaluate the match. So handler X could match N versions, and when receiving the callback, you can inspect the protocol pinned on the stream to infer which abilities you activate, etc.

We've traditionally used semver in libp2p, IPFS, etc., but a few of us are not convinced of its aptness. Semver is good to convey the severity of changes in APIs, but protocol evolution is a different beast.

You generally strive to keep things backwards compatible, yet enable feature upgrades/opt-ins over time that may not be rolling cumulative, e.g. if 1.14.0 and 1.15.0 introduce feature X and Y respectively, how do I convey that a given peer supports Y but not X?

That's where protocol granularity comes into play: potentially make each feature/message/RPC a different protocol, and track "generations/revisions" of those protocols. libp2p supports that design. A few thoughts:

  • We're evolving Multistream to avoid round trips when you're certain the other party supports a protocol (selection vs negotiation), for example, via an ENR.
  • If two messages have timing dependencies (can't send message B until after A) and they are segregated across protocols, it may make state-tracking a bit more complicated.

integers tend to be harder to negotiate than strings, in a decentralized manner - a string, you just pick one and start using it - if it becomes popular, people will avoid it.

I meant replacing semver by protocol generations, e.g. /eth/serenity/rpc/v10. Sorry for not being clear!

@mslipper

We have a call tomorrow to go over the wire protocol, after which I'll update this PR with the group's decision. Ping me on Gitter if you'd like an invite to the call.

@arnetheduck

Can someone familiar with libp2p draw a diagram of all payload bytes included in a typical packet, all the way from the transport (TCP) up, including envelopes, wrappers, multiplexers, etc.?

latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
best_slot: uint64

slots are based on wall time - what's the best_slot field for?


Pretty sure this is supposed to refer to the slot of the head block. Maybe rename best_root and best_slot to head_root and head_slot (or, to be even clearer, head_block_root/slot)?


I think head_block_root and head_slot would be clearer
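For illustration, the hello fields under discussion could be modeled with the suggested renames like this (a sketch only; `head_block_root`/`head_slot` are the reviewers' proposed names, not the merged spec):

```python
from dataclasses import dataclass

@dataclass
class Hello:
    # bytes fields are 32-byte roots; int fields are uint64 in SSZ terms.
    latest_finalized_root: bytes
    latest_finalized_epoch: int
    head_block_root: bytes   # proposed rename of best_root
    head_slot: int           # proposed rename of best_slot
```

With the rename, `head_slot` unambiguously refers to the slot of the client's current head block rather than to wall-clock time.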

)
```

Send a list of block roots and slots to the requesting peer.

which beacon block roots? known heads? all?

)
```

Requests the `block_bodies` associated with the provided `block_roots` from the peer. Responses MUST return `block_roots` in the order provided in the request. If the receiver does not have a particular `block_root`, it must return a zero-value `block_body` (i.e., a `block_body` container with all zero fields).

It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then) and a few minor items like transfers. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers and then bodies creates additional round trips, which are a cost of their own.

@zscole

zscole commented Mar 18, 2019

For the sake of expediency, we're going to be implementing a few changes in Hobbits to provide an easy and modular mechanism that allows clients to start talking to one another. For the time being, connections will be established via basic TCP with a fixed port (9000 is what we had discussed and agreed upon in the call). We'll be changing the Hobbits spec to assume SSZ as the default serialization method, and we're going to convert it from a text protocol to a binary protocol in version 0.2.


+---+
```

Once the handshake completes, the client with the higher `latest_finalized_epoch` or `best_slot` (if the clients have equal `latest_finalized_epoch`s) SHOULD request beacon block roots from its counterparty via `beacon_block_roots` (i.e., RPC method `10`).
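The rule above can be sketched as a simple comparison; this is an illustrative sketch only, and the all-equal case (discussed in this thread) is deliberately left to client policy:

```python
def should_request_roots(local_epoch, local_slot, remote_epoch, remote_slot):
    """Per the handshake rule above: the client with the higher
    latest_finalized_epoch (falling back to best_slot on an epoch tie)
    SHOULD issue the beacon_block_roots request (RPC method 10).
    Returns True when the local client is the one that should ask.
    When both pairs are equal, neither side is obliged to request."""
    if local_epoch != remote_epoch:
        return local_epoch > remote_epoch
    return local_slot > remote_slot
```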

How would this be handled if the clients both have an equal latest_finalized_epoch and best_slot?


Then you discard the client, because you know it's either at genesis or providing invalid data. Finalization happens several epochs behind best_slot in the happy case.


@arnetheduck ah no, I was referring to both clients having the same best_slot and finalized_epoch together, i.e. FinalizedEpoch A == FinalizedEpoch B and BestSlot A == BestSlot B.


Ah, right. I find that the situation is somewhat analogous to receiving a block whose parent is unknown - you have to make a very similar decision there. The information in this hello, just like in the parentless block, is essentially useless from a trust perspective, and you need to turn to other sources.

@djrtwo suggested that attestations might be a good heuristic, as signing one carries risk for the validator that does so. The information here can, from what I can see, be used to quickly disconnect from a client if they're saying the network is different. The rest is advisory, and you're hoping for the best.

)
```

Client MAY send `goodbye` messages upon disconnection. The reason field MUST be one of the following values:

Shouldn't we still be connected after sync has finished? We would still need to propagate any newly proposed blocks to our peers.

@zah

zah commented Mar 19, 2019

I've also brought back the message envelope as discussed during the call. This should match what goes into Hobbits version 0.2. See messaging.md for details.

When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and the expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as /eth/serenity/beacon/rpc/hello/snappy:ssz

cc @raulk to confirm that this is the best way to do it.

@mslipper

I've also brought back the message envelope as discussed during the call. This should match what goes into Hobbits version 0.2. See messaging.md for details.

When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and the expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as /eth/serenity/beacon/rpc/hello/snappy:ssz

cc @raulk to confirm that this is the best way to do it.

Correct, we don't need this enveloping at all with libp2p. However some clients don't have libp2p implementations yet and specifically requested a way to communicate without it in order to test their networking stack. Thus, we need a way of representing compression and serialization that's agnostic of libp2p.

@arnetheduck

I think this might be a good time to split up this spec:

  • one that focuses on application level protocol: how to get and gossip blocks etc. Basically the SSZ-encoded RPC requests and other payloads.
  • one that focuses on mapping eth2 to a transport layer. The working theory is that this transport layer is libp2p, but it could also be a simpler clear-text testnet protocol some have asked for, or even devp2p or whatever.

then, from the application level protocol we have a list of properties our transport layer must have: version and feature negotiation, broadcast, peer-to-peer etc. for each of these, we look at how they map to the underlying transport.

In summary, we'd have several documents (or sections, appendices, etc.):

  • 0-eth2-application.txt - blocks, attestations, etc
  • 0-eth2-libp2p-mapping.txt - peer id format, gossipsub vs floodsub, minimal supported encryption methods, etc - basically a description of how eth uses libp2p specifically, starting from something like Minimum libp2p requirements ethresearch/p2p#4
  • 0-eth2-temporary-mapping.txt - temporary pre-libp2p network for those that feel this is useful.

Some of the coordination issues we see stem from it being pretty early for protocol discussions, especially cross-client ones. Splitting it up will allow us to make progress on the application-level messages without being too distracted by other goings-on.

@dreamcodez

dreamcodez commented Mar 19, 2019

@mslipper general current critiques:

  1. regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'

  2. regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)

  3. regarding lack of separation of lightweight headers from the message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.

@mslipper re: binary protocol 0.2

  1. I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)

  2. If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.

  3. once again -- headers???

;)

awaits feedback in bunker

@mslipper

mslipper commented Mar 19, 2019

@dreamcodez

regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'

Not sure what you mean by this... the 'responses' as defined here contain the result of the procedure call or an error. Are you referring to defined error codes?

regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)

What do you mean by headers in this case? If you want an additional 'command' (in Hobbits parlance, anyway) you'd just need to define a new method_id. Note that the serialization of that command is handled by the message envelope right now, and by upfront negotiation in the future.

regarding lack of separation of lightweight headers from the message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.

Can you give an example of the type of metadata you are referring to?

I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)

This would mean the introduction of a handshake process, which defeats the purpose of having a simple intermediary wire format to tide us over until libp2p is ready. You can already change the compression/encoding protocols by changing the nibble values in the message envelope.

If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.

I'd honestly prefer not to do this, since defining them would create an implicit requirement within this spec to support multiple compression algorithms. I'd much rather we decide on one as a group to reduce implementation overhead. This is why 0x0 (i.e., no compression) is the only defined compression nibble at this time.

@mslipper

I think this might be a good time to split up this spec:

* one that focuses on application level protocol: how to get and gossip blocks etc. Basically the SSZ-encoded RPC requests and other payloads.

* one that focuses on mapping eth2 to a transport layer. the working theory is that this transport layer is libp2p, but it could also be a simpler clear-test testnet protocol some have asked for, or even devp2p or whatever.

then, from the application level protocol we have a list of properties our transport layer must have: version and feature negotiation, broadcast, peer-to-peer etc. for each of these, we look at how they map to the underlying transport.

In summary, we'd have several documents (or sections of appendicec etc):

* 0-eth2-application.txt - blocks, attestations, etc

* 0-eth2-libp2p-mapping.txt - peer id format, gossipsub vs floodsub, minimal supported encryption methods, etc - basically a description of how eth uses libp2p specifically, starting from something like [ethresearch/p2p#4](https://github.com/ethresearch/p2p/issues/4)

* 0-eth2-temporary-mapping.txt - temporary pre-libp2p network for those that feel this is useful.

Some of the coordination issues we see stem from it being pretty early for protocol discussions, especially cross-client ones. Splitting it up will allow us to make progress on the application-level messages without being too distracted by other goings-on.

@arnetheduck I really like this idea... what do you think of doing this separately from this PR? I'm sure that there will be additional feedback generated from those documents.

@FrankSzendzielarz

Yes. I think we need flexibility and having aspects like serialisation method separated from message format and semantics will be of more immediate utility than say wire performance.

@arnetheduck

@arnetheduck I really like this idea... what do you think of doing this separately from this PR? I'm sure that there will be additional feedback generated from those documents.

I think we're pretty close already in terms of how the spec is sectioned (nice work!) - just need to make the distinction slightly more explicit. It's mostly an administrative matter to facilitate discussion, make a dedicated space for separate non-libp2p discussion, and establish terminology once and for all (what's "wire" and "application" and so on). Can be done at any time, whenever feels convenient.

Eventually, I also hope it will lead to a more clearly documented interface between eth2 and the underlying overlay (haha).

@dreamcodez

dreamcodez commented Mar 20, 2019

@dreamcodez

regarding the lack of defined response codes, it seems that without defined generic response codes each rpc requires a pair of messages to be defined even if it is just to say 'yes that thing was successfully done'

Note sure what you mean by this... the 'responses' as defined here contain the result of the procedure call or an error. Are you referring to defined error codes?

Omitting generic RPC response codes will require us to produce an explicitly coded 'response' message to be paired with each such request. If message A gets sent via RPC, then it's nice to not have to create a message AResponse to indicate the response permutations of 'success' and 'error' at the very least.

regarding the lack of protocol extensibility in the form of protocol headers (are we so certain that people working our network protocol will never find our data structures wanting?)

What do you mean by headers in this case? If you want an additional 'command' (in Hobbits parlance, anyway) you'd just need to define a new method_id. Note that the serialization of that command is handled by the message envelope right now, and by upfront negotiation in the future.

Not referring to commands -- referring to pieces of metadata which our core protocol could not have possibly foreseen use cases for. For example, proxies and/or routing topologies may need to tag extra metadata to efficiently push decisions and context to the next hop. Rather than having to modify our protocol, they can just adopt new header extensions. Cache invalidation nonces are another use case.

regarding lack of separation of lightweight headers from the message payload; it is not enough to just know which byte offsets to read -- sometimes it's useful to know metadata about a message without asking the sender to push the whole thing -- it's also useful to make decisions (especially with regard to routing) without unwrapping the payload itself -- this should be in the form of lightweight headers.

Can you give an example of the type of metadata you are referring to?

See above -- I know there are more use cases but I cannot express them at the moment. The main idea is that I should be able to grab lightweight metadata when looking up objects, to see if I need the full object (maybe it hasn't changed since the last time I asked).

I would prefer request compression to be one value, and response compressions to be a list (list of compression to be accepted) -- this allows for future experimentation and upgradeability to other compression protocols without breaking shit. (this is of course applicable to just rpc messages)

This would mean the introduction of a handshake process, which defeats the purpose of having a simple intermediary wire format to tide us over until libp2p is ready. You can already change the compression/encoding protocols by changing the nibble values in the message envelope.
I disagree -- this does not force anyone to handshake in a way which would create any additional overhead -- an RPC requestor peer can simply say 'hey, I'm sending this in spdy, and I can receive responses in X, Y, Z'. If the response compression preference list gives a few choices, this creates a high probability of interoperability, especially if the scheme chosen in the initial request is spdy or something non-experimental.

If we are going to use a single binary byte to represent compression -- please at least specify the most popular 5-10ish codecs' bytecodes.

I'd honestly prefer not to do this, since defining them would create an implicit requirement within this spec to support multiple compression algorithms. I'd much rather we decide on one as a group to reduce implementation overhead. This is why 0x0 (i.e., no compression) is the only defined compression nibble at this time.

Let's say that our request looks like:
EWP 0.1 snappy gzip,snappy -- this means that I will be able to work with any endpoint which supports snappy, but I can also support gzip coming back if it's available as a higher preference.
I don't really see any drawbacks to supporting this.

Last but not least I reserved codes 406, and 407 to indicate 'request compression not supported' and 'response compression not supported' respectively -- this leaves a way in the future for us to get out of the one-compression trap without having to change the protocol. After the first request, the server will respond with the intersection of the most preferred yet supported protocol of the client requestor for the response payload -- the requestor can then use this knowledge to use its most preferred/supported compression on subsequent requests.

For the time being, to avoid this negotiation, the first implementation's request line should just be:

EWP 0.1 snappy snappy

OR

EWP 0.1 none none

to avoid all negotiation issues for now.
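The proposed request line is easy to parse mechanically. A sketch, assuming the hypothetical shape `EWP <version> <request-compression> <response-compressions>` (this format is the commenter's proposal, not part of the merged spec):

```python
def parse_ewp_request_line(line: str) -> dict:
    """Split the hypothetical 'EWP 0.1 <req-comp> <resp-comps>' line.
    <resp-comps> is a comma-separated preference list, most preferred
    first; the responder picks the first entry it supports."""
    parts = line.strip().split()
    if len(parts) != 4 or parts[0] != "EWP":
        raise ValueError("not an EWP request line")
    _, version, req_comp, resp_comps = parts
    return {
        "version": version,
        "request_compression": req_comp,
        "response_compressions": resp_comps.split(","),
    }

def pick_response_compression(preferences, supported):
    """Return the requester's most preferred supported codec, or None,
    which would map to the proposed 407 'response compression not
    supported' code."""
    for codec in preferences:
        if codec in supported:
            return codec
    return None
```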

@jannikluhn

I think this might be a good time to split up this spec

I support this. Maybe we can even abstract away the serialization format from the application level protocol in order to isolate another heavily discussed component.

@hwwhww hwwhww added the general:RFC Request for Comments label Mar 21, 2019

### Alternative for Non-`libp2p` Clients

Some clients are still waiting for `libp2p` implementations in their respective languages. As such, they MAY listen for raw TCP messages on port `9000`. To distinguish RPC messages from other messages on that port, a byte prefix of `ETH` (`0x455448`) MUST be prepended to all messages. This option will be removed once `libp2p` is ready in all supported languages.
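A sketch of this prefixing rule for the raw-TCP fallback; how the rest of the message is delimited is governed by the envelope described in messaging.md, so only the prefix itself is handled here:

```python
ETH_PREFIX = b"ETH"  # 0x455448, as specified above

def frame(message: bytes) -> bytes:
    """Prepend the mandatory prefix before writing to the raw TCP socket."""
    return ETH_PREFIX + message

def unframe(data: bytes) -> bytes:
    """Validate and strip the prefix on receipt."""
    if not data.startswith(ETH_PREFIX):
        raise ValueError("missing ETH prefix on raw TCP message")
    return data[len(ETH_PREFIX):]
```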

I think we should allow using a separate port. It's entirely possible a client will allow both communication modes.


Also, this paragraph should go to the top since it relates to the envelope and port 9000.


```
(
sha: bytes32

I didn't ask when this got drafted, sorry. What is sha here?


The commit hash of the node.


What is that? Best root?


## Encoding Nibble Values

- `0x1`: SSZ

Permission to add 0x2 for BSON please?

@jannikluhn jannikluhn Mar 23, 2019

I believe it would be best if we could agree on a single encoding format to maximize compatibility and minimize implementation overhead.


Would prefer not, for the same reasons articulated on our call - we should agree together on a single encoding scheme.


## Compression Nibble Values

- `0x0`: no compression

Permission to add 0x1 for snappy compression please?


See above - I don't want to commit teams to an implementation without getting consensus.

)
```

Requests a list of block roots and slots from the peer. The `count` parameter MUST be less than or equal to `32768`. The slots MUST be returned in ascending slot order.
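Only two requirements are explicit in this text (the count bound and ascending slot order); a sketch of a receiver-side check that enforces just those:

```python
MAX_BLOCK_ROOTS_COUNT = 32768  # upper bound on `count`, per the text above

def check_block_roots(count, response):
    """`response` is a list of (block_root, slot) pairs. Enforces only
    the two explicit requirements: count <= 32768 on the request, and
    ascending slot order on the response. Whether slots may be skipped
    or fewer roots returned is not specified here."""
    if count > MAX_BLOCK_ROOTS_COUNT:
        raise ValueError("count exceeds 32768")
    slots = [slot for _root, slot in response]
    if any(a > b for a, b in zip(slots, slots[1:])):
        raise ValueError("slots not in ascending order")
    return True
```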

I have some questions regarding the response:

  • Can you skip a slot (e.g., 1, 3, 4)?
  • Can you return fewer or more than the requested roots?
  • Can you start at a higher slot?
  • Can you return a slot outside of the start_slot + count bounds?

Maybe "The slots MUST be returned in ascending slot order." is deliberately succinct already? If that is the case, we could add something like "The only requirements for roots are ...".

P.S. this is my first comment, thanks for making this doc!

@FrankSzendzielarz

FrankSzendzielarz commented Mar 23, 2019 via email

(
id: uint64
response_code: uint16
result: bytes

Can we just use one structure? You can tell if it's a response by comparing request IDs.


### Beacon Block Bodies

**Method ID:** `12`

Just like in LES, I would use different method IDs for requests and responses, so it's possible for me to proactively send you blocks and headers using RPC without you needing to know about it in advance.

@raulk

raulk commented Mar 26, 2019

@zah

When libp2p's multistream implementation is upgraded to 2.0 (libp2p/specs#95), each request (a.k.a. new stream in libp2p's lingo) will be identified with a short numeric identifier. Wouldn't this value be enough to determine the compression and encoding being used and the expected response length? To achieve this in practice, we would just need to define additional protocol identifiers such as /eth/serenity/beacon/rpc/hello/snappy:ssz

cc @raulk to confirm that this is the best way to do it.

You don't need to wait for multistream 2.0. When you open a new stream, multistream 1.0 is in charge of associating that stream with a protocol. To be clear:

  1. When opening a stream, you tell libp2p which protocol P you want it to be associated with via the API.
  2. libp2p opens a stream with the underlying multiplexer (yamux, mplex, spdystream, quic, etc.). At this point, the stream is just a number.
  3. multistream 1.0 kicks in and selects protocol P.
  4. libp2p returns the stream, contextualised for that protocol P.
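For readers unfamiliar with step 3: multistream 1.0 messages are unsigned-varint length-prefixed strings terminated by a newline. A sketch of what the initiator writes on a fresh stream (the eth protocol ID shown is illustrative, not a spec-defined one):

```python
def uvarint(n: int) -> bytes:
    """Encode an unsigned integer as a multiformats unsigned varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def ms_frame(message: str) -> bytes:
    """Frame a multistream-select 1.0 message: uvarint length prefix,
    newline-terminated payload."""
    payload = (message + "\n").encode()
    return uvarint(len(payload)) + payload

# Step 3 above: after the multistream header, the initiator proposes
# the protocol; the responder echoes the ID if supported, or 'na'.
wire = ms_frame("/multistream/1.0.0") + ms_frame("/eth/serenity/beacon/rpc/hello")
```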

@arnetheduck

Generally, I see we've gained a lot of different uintXX sizes, while eth2 has opted to go with uint64 (almost?) exclusively - network/chain/method IDs, etc. The motivation seems to be to save a few bytes here and there, but these look like one-offs in requests and the like, nothing that will fundamentally make an actual difference.

Considering sizes of payloads and taking into consideration the departure from rest of the eth2 spec, how do we feel about sticking with uint64 unless there's strong (aka motivated by compelling numbers) reason not to?

chain_id: uint64
latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32

Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like best_root that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.

The details of this are TBD - but probably we're looking at something like attestations: [Attestation] where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).

@djrtwo djrtwo left a comment

I'm merging this. We can take the conversation to issues tagged network and to subsequent PRs.

@djrtwo djrtwo merged commit bae727a into ethereum:dev Mar 28, 2019