client/network-gossip: Redesign validator #6335

Open

mxinden opened this issue Jun 11, 2020 · 3 comments
Labels
I7-refactor Code needs refactoring. U2-some_time_soon Issue is worth doing soon. Z4-involved Can be fixed by an expert coder with good knowledge of the codebase.

Comments

@mxinden
Contributor

mxinden commented Jun 11, 2020

Status quo

A gossip Validator is called by a GossipEngine to validate gossip messages. It can decide whether a given message should be:

  1. Passed to the higher layer and propagated to other nodes.

  2. Passed to the higher layer but not propagated to other nodes.

  3. Discarded.

In order to make these decisions it needs to be aware of the state of the upper layer gossip message consumer. A validator thus communicates with two components: (1) the GossipEngine and (2) the upper layer consumer.
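
For illustration, a minimal sketch of that decision surface. This is simplified: the real trait is generic over the block type and receives more context (e.g. about the sending peer) than shown here, and `Topic` is just a placeholder type.

```rust
// Simplified sketch only; not the exact sc-network-gossip signatures.

type Topic = [u8; 32]; // placeholder

/// The three possible outcomes of validating a gossip message.
enum ValidationResult {
    /// 1. Pass to the higher layer and propagate to other nodes.
    ProcessAndKeep(Topic),
    /// 2. Pass to the higher layer but do not propagate further.
    ProcessAndDiscard(Topic),
    /// 3. Discard the message entirely.
    Discard,
}

/// Called synchronously by the gossip engine for every incoming message.
trait Validator: Send + Sync {
    fn validate(&self, data: &[u8]) -> ValidationResult;
}
```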

Sequence diagram: validator (status quo). Source below; render with https://bramp.github.io/js-sequence-diagrams/:

UpperLayer->GossipValidator: new()
UpperLayer->GossipEngine: new(network, gossip_validator)
UpperLayer->GossipEngine: Subscribe to topic X
GossipEngine->UpperLayer: channel<Message>
UpperLayer->GossipValidator: Regularly: Push information needed\n for validation.
Network->GossipEngine: New message
GossipEngine->GossipValidator: validate(message)
GossipValidator->GossipEngine: validation_result
GossipEngine->UpperLayer: Valid message

All methods of the Validator trait are synchronous in the sense that they either return immediately or block the underlying OS thread. Communication between the three components (upper layer, validator, gossip engine) is thus encouraged to happen by wrapping the validator in an Arc<Mutex<>> and having both the upper layer and the gossip engine hold a reference (see finality-grandpa).
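
A minimal sketch of that sharing pattern; the MyValidator type and its current_round field are hypothetical stand-ins for whatever state the upper layer pushes in:

```rust
// Sketch of the Arc<Mutex<>> sharing pattern described above.
use std::sync::{Arc, Mutex};

struct MyValidator {
    // State pushed in by the upper layer and read during validation,
    // e.g. the current round the consumer cares about (hypothetical).
    current_round: u64,
}

fn main() {
    let validator = Arc::new(Mutex::new(MyValidator { current_round: 0 }));

    // One clone is handed to the gossip engine on construction ...
    let for_engine = validator.clone();
    // ... and one is kept by the upper layer to push fresh state.
    let for_upper_layer = validator.clone();

    // Upper layer: regularly push information needed for validation.
    for_upper_layer.lock().unwrap().current_round = 42;

    // Gossip engine: lock the same validator while validating a message.
    // If this runs inside an async task, holding the lock can block the
    // whole executor thread, which is exactly the problem described below.
    let round = for_engine.lock().unwrap().current_round;
    println!("validating against round {round}");
}
```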

Sharing a gossip validator via an Arc<Mutex<>> is problematic because:

  1. Both sides (upper layer, gossip engine) operate via green threading (Rust futures). Taking a conventional lock can block the OS thread and thus block all tasks running on that thread.

  2. There is no way for the validator to exercise back-pressure: while it can simply return an error when overloaded, it has no way to notify the caller once it is ready to receive work again.

  3. In case a validator is not just a pure validator but actually handles a lot more (e.g. sending neighbor packets), there is no way for it to communicate with other components other than via conventional Arc<Mutex<>> primitives or unbounded futures::channel::mpsc channels.

One cannot use a futures channel, as the validator has no way of yielding control to another task (returning Poll::Pending). One cannot use a std::sync::mpsc channel, as that would block the entire OS thread and thus all tasks running on it.

Possible alternatives

Making Validator methods async

One could make the methods of the Validator trait async. This would enable a validator to exercise back-pressure and be able to talk to other components (even though I doubt the latter is a good idea in the first place).
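
One possible shape for such a trait, sketched with boxed futures so it stays object-safe. The names and signatures are illustrative, not a concrete proposal:

```rust
// Sketch only: an async-style validator whose methods return futures that
// the gossip engine awaits, giving the validator a point at which it can
// yield and thereby exercise back-pressure.
use std::future::Future;
use std::pin::Pin;

enum ValidationResult {
    ProcessAndKeep,
    ProcessAndDiscard,
    Discard,
}

trait AsyncValidator: Send + Sync {
    fn validate<'a>(
        &'a self,
        data: &'a [u8],
    ) -> Pin<Box<dyn Future<Output = ValidationResult> + Send + 'a>>;
}

struct AlwaysKeep;

impl AsyncValidator for AlwaysKeep {
    fn validate<'a>(
        &'a self,
        _data: &'a [u8],
    ) -> Pin<Box<dyn Future<Output = ValidationResult> + Send + 'a>> {
        // A real validator could await a channel or semaphore here before
        // accepting more work.
        Box::pin(async { ValidationResult::ProcessAndKeep })
    }
}
```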

Adjustments needed between Validator and GossipEngine are doable, as GossipEngine never clones its Validator and itself implements Future.

Adjustments needed between Validator and e.g. finality-grandpa are hard, as validator methods are called within functions that cannot easily yield (e.g. note_set, note_round, note_commit_finalized, ...) and the validator is cloned in many places.

Removing the concept of a Validator

Sequence diagram: no-validator. Source below; render with https://bramp.github.io/js-sequence-diagrams/:

UpperLayer->GossipEngine: new(network, gossip_validator)
UpperLayer->GossipEngine: Subscribe to topic X
GossipEngine->UpperLayer: channel<Message>
Network->GossipEngine: New message
GossipEngine->UpperLayer: New message
UpperLayer->GossipEngine: Please propagate message

Instead of having the GossipEngine ask a Validator to validate a message before passing it on to an upper layer, remove the concept of a Validator altogether and have the GossipEngine always pass messages to the upper layer. The upper layer validates the message and passes it (or a message id) back to the GossipEngine if it should be propagated further (libp2p gossipsub uses the same pattern).

This reduces (1) the number of components, (2) the number of communication channels needed and (3) the amount of concurrency.

Adjustments needed on the GossipEngine side are easy: instead of propagating after validation, the engine propagates on request by the upper layer, as sketched below.
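
A rough sketch of that flow. The method names are illustrative: messages_for is modelled on the existing subscription API, report_valid does not exist today, and a real implementation would hand out an async stream rather than a blocking std channel:

```rust
// Sketch only: validator-less gossip, with validation in the upper layer.
use std::sync::mpsc::Receiver;

type Topic = [u8; 32];

struct Message {
    topic: Topic,
    data: Vec<u8>,
}

struct GossipEngine;

impl GossipEngine {
    /// Subscribe to a topic; incoming messages are delivered unvalidated.
    fn messages_for(&mut self, _topic: Topic) -> Receiver<Message> {
        unimplemented!("sketch only")
    }

    /// Called by the upper layer once it has validated a message and wants
    /// the engine to keep it and propagate it to other peers (hypothetical).
    fn report_valid(&mut self, _message: &Message) {
        unimplemented!("sketch only")
    }
}

fn is_valid(_message: &Message) -> bool {
    true // upper-layer specific validation logic goes here
}

/// Upper layer: validation now lives here instead of in a Validator.
fn upper_layer_loop(engine: &mut GossipEngine, topic: Topic) {
    let incoming = engine.messages_for(topic);
    while let Ok(message) = incoming.recv() {
        if is_valid(&message) {
            engine.report_valid(&message);
        }
        // Invalid messages are simply dropped here.
    }
}
```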

Adjustments needed e.g. in finality-grandpa are hard, as (1) finality-grandpa does more than validating messages in its Validator (e.g. neighbor packets) and (2) both the Validator and the GossipEngine are cloned and thus shared many times across different finality-grandpa components.

Priority for this issue

While I find this definitely worth fixing, especially as more and more Polkadot components are building on top of this, there are more urgent things to do, e.g.:

@mxinden mxinden added I7-refactor Code needs refactoring. M4-core U2-some_time_soon Issue is worth doing soon. Z4-involved Can be fixed by an expert coder with good knowledge of the codebase. labels Jun 11, 2020
@rphmeier
Contributor

rphmeier commented Jun 11, 2020

There should also be an opportunity to write something different alongside the existing code so we aren't required to rewrite finality-grandpa as a precondition for the refactoring to go in.

Moving everything up to the higher layer introduces some complications, as you basically have to reimplement all the gossip logic at the higher level if you want different peers to receive different messages based on their perceived protocol state. To avoid that, we'd need these properties:

  • The gossip pool should track which peers have seen which messages
  • A utility where we can make a request: pass all messages under this topic to this peer, but use this predicate to filter them
  • A utility where we can make a request: file this message under this topic and attempt to propagate to all peers, filtering with this predicate

We'd also need some mechanism for triggering predicate-gated message re-propagations (analogous to BroadcastReason in the current implementation).
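
A sketch of what the two requested utilities could look like; all names and types here (GossipPool, PeerId, Topic, MessageId) are hypothetical placeholders, not existing API:

```rust
// Hypothetical API sketch; none of these names exist in sc-network-gossip.

struct PeerId(u64); // placeholder
type Topic = [u8; 32];
type MessageId = [u8; 32];

trait GossipPool {
    /// "Pass all messages under this topic to this peer, but use this
    /// predicate to filter them."
    fn send_topic_filtered(
        &mut self,
        peer: &PeerId,
        topic: Topic,
        allowed: &mut dyn FnMut(&MessageId, &[u8]) -> bool,
    );

    /// "File this message under this topic and attempt to propagate it to
    /// all peers, filtering with this predicate per peer."
    fn multicast_filtered(
        &mut self,
        topic: Topic,
        data: Vec<u8>,
        allowed: &mut dyn FnMut(&PeerId) -> bool,
    );
}
```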

@mxinden
Contributor Author

mxinden commented Jun 12, 2020

There should also be an opportunity to write something different alongside the existing code so we aren't required to rewrite finality-grandpa as a precondition for the refactoring to go in.

True. We could as well introduce client/network-gossip-v2 to be used by Polkadot and have finality-grandpa keep using client/network-gossip (for now). From that point on we would need to maintain two gossip systems. I don't think that would reduce the current maintenance burden; it would much rather increase it.


Moving everything up to the higher layer introduces some complications, as you basically have to reimplement all the gossip logic at the higher level if you want different peers to receive different messages based on their perceived protocol state.

Are you referring to the "Removing the concept of a Validator" suggestion here? My description might be misleading. I am not suggesting moving all the logic of client/network-gossip/src/state_machine.rs into the respective upper layers. The gossip system would still:

  1. Keep track of "which peers have seen which message".

  2. Allow the upper layer to send all messages of a given topic to a given peer.

  3. Allow the upper layer to send a given message to all peers.

In order to make (2) and (3) aware of the upper layer protocol state, one can for example have GossipEngine::multicast take a closure similar to Validator::message_allowed, to only send messages to peers that need the message (politeness). (Even though I am not yet sure whether a closure might introduce borrow issues.)
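
A usage sketch of that idea. The closure-taking multicast signature and the PeerState/UpperLayer types are hypothetical; the closure simply captures upper layer state, playing the role Validator::message_allowed plays today:

```rust
// Hypothetical sketch: today's `multicast` does not take a predicate; the
// closure-based variant and the types below are purely illustrative.

type Topic = [u8; 32];

/// The gossip engine's per-peer view exposed to the predicate (hypothetical).
struct PeerState {
    last_seen_round: u64,
}

/// Upper-layer protocol state (hypothetical).
struct UpperLayer {
    current_round: u64,
}

struct GossipEngine;

impl GossipEngine {
    /// Send `data` under `topic` to every peer the predicate accepts.
    fn multicast(
        &mut self,
        _topic: Topic,
        _data: Vec<u8>,
        _allowed: impl FnMut(&PeerState) -> bool,
    ) {
        // sketch only
    }
}

impl UpperLayer {
    fn announce(&self, engine: &mut GossipEngine, topic: Topic, data: Vec<u8>) {
        let round = self.current_round;
        // Politeness: only peers that are not ahead of us need the message.
        engine.multicast(topic, data, move |peer| peer.last_seen_round <= round);
    }
}
```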

@infinity0
Contributor

There is no way for the validator to exercise back-pressure: while it can simply return an error when overloaded, it has no way to notify the caller once it is ready to receive work again.

I just want to point out that an alternative way to implement backpressure is not to "notify the caller" (if I understand correctly, the caller here is the Upper Layer), but rather for the caller to place all outgoing items on a bounded priority buffer.

If the producer attempts to add to a full buffer, old obsolete items are dropped using producer/application-specific logic, since only the producer (i.e. the Upper Layer, in the terminology of this issue) knows precisely which item is best to drop.

The consumer takes items from the buffer in priority order, with the priority also being defined by the producer/application, whenever it is ready to receive work again. This can happen either by polling the buffer or via a blocking read of it (which is effectively a notification in terms of OS primitives, but I gather you meant "notify" in a more general, application-level sense).

Using this idea, all tricky synchronisation issues are pushed into the buffer implementation. Then it is also unnecessary for components to hold Arc references to each other.
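
A minimal sketch of such a bounded priority buffer; the names and the eviction rule are illustrative, and a production version would use a structure with cheap minimum-eviction plus an async-aware notification for the consumer:

```rust
use std::collections::BinaryHeap;

/// An item together with its producer-defined priority (higher = more urgent).
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Prioritised {
    priority: u64,
    payload: Vec<u8>,
}

struct BoundedPriorityBuffer {
    capacity: usize,
    heap: BinaryHeap<Prioritised>,
}

impl BoundedPriorityBuffer {
    fn new(capacity: usize) -> Self {
        Self { capacity, heap: BinaryHeap::new() }
    }

    /// Producer side: if the buffer is full, evict the lowest-priority item;
    /// only the producer knows which items are obsolete.
    fn push(&mut self, item: Prioritised) {
        self.heap.push(item);
        if self.heap.len() > self.capacity {
            // BinaryHeap has no cheap "remove minimum"; a real implementation
            // would likely use e.g. a BTreeMap keyed by priority instead.
            let mut items = std::mem::take(&mut self.heap).into_sorted_vec();
            items.remove(0); // ascending order: index 0 is the least urgent
            self.heap = items.into_iter().collect();
        }
    }

    /// Consumer side: take the highest-priority item whenever ready for work.
    fn pop(&mut self) -> Option<Prioritised> {
        self.heap.pop()
    }
}
```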
