
Network stream events shouldn't be cloned #14599

Open
sandreim opened this issue Jul 20, 2023 · 7 comments
Labels
I3-bug The node fails to follow expected behavior. I9-optimisation An enhancement to provide better overall performance in terms of time-to-completion for a task.

Comments

@sandreim
Contributor

Currently, the NetworkWorker creates one clone per event stream listener in https://github.com/paritytech/substrate/blob/master/client/network/src/service/out_events.rs#L219.

For example, in a simple Versi test with 200 validators and 40 parachains we see a bit above 1 MB/s of event traffic from the network:
[Screenshot 2023-07-20 at 12:30:23]

Considering that there are 5 such listeners, I think this is a big problem with the current design: in practice 5 additional copies are created, wasting memory and CPU for nothing and raising the total internal traffic to more than 5 MB/s.
[Screenshot 2023-07-20 at 12:41:08]

We should strive for a zero-copy approach in our internal network stack architecture; for example, we could use the Bytes crate or just a simple Arc to avoid the unneeded extra copies. Additionally, we could do the per-protocol filtering at a lower level instead of in every consumer of these messages. It might also be worth optimizing the protocol name check per message, which seems wasteful since these are big strings based on genesis hashes. On connection we could negotiate some shorter name, or just an integer, and use that instead.
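
A minimal sketch of what an Arc-based fan-out could look like, assuming a simplified stand-in for the real Event type and the out_events sender list (the names here are illustrative, not the actual out_events.rs API):

```rust
use std::sync::Arc;

use futures::channel::mpsc;

// Illustrative stand-in for the real network Event type.
#[allow(dead_code)]
struct Event {
    protocol: String,
    payload: Vec<u8>,
}

// Hypothetical fan-out: listeners receive Arc<Event>, so dispatching to N
// listeners clones only the pointer, never the payload bytes.
struct OutChannels {
    senders: Vec<mpsc::UnboundedSender<Arc<Event>>>,
}

impl OutChannels {
    fn send(&mut self, event: Event) {
        let event = Arc::new(event);
        // Send a cheap Arc clone to every listener and drop senders whose
        // receiver side has been closed.
        self.senders
            .retain(|tx| tx.unbounded_send(event.clone()).is_ok());
    }
}
```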

FWIW this likely contributes to the CPU usage pattern we observe; see paritytech/polkadot-sdk#702.

@sandreim sandreim added I3-bug The node fails to follow expected behavior. I9-optimisation An enhancement to provide better overall performance in terms of time-to-completion for a task. labels Jul 20, 2023
@sandreim
Contributor Author

CC @altonen @tomaka

@tomaka
Contributor

tomaka commented Jul 20, 2023

(I'm no longer working for Parity)

On connection we could negotiate some shorter name, or just an integer and use that instead.

Please don't change the wire protocol names for what should be an internal fix. The protocol name is sent only once, when the substream is opened. The fact that subsequently the protocol name is compared at every message is an implementation design choice of Substrate.
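
For what it's worth, the per-message string comparison could be avoided without touching the wire format by interning protocol names internally when a substream is negotiated. A rough sketch under that assumption (the registry type and names below are hypothetical, not Substrate's API):

```rust
use std::collections::HashMap;

/// Small, Copy handle compared per message instead of the full wire name.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ProtocolId(usize);

#[derive(Default)]
struct ProtocolRegistry {
    ids: HashMap<String, ProtocolId>,
}

impl ProtocolRegistry {
    /// Called once when a substream is opened; the on-the-wire name,
    /// which is genesis-hash based, stays exactly as it is today.
    fn intern(&mut self, wire_name: &str) -> ProtocolId {
        let next = ProtocolId(self.ids.len());
        *self.ids.entry(wire_name.to_owned()).or_insert(next)
    }
}

// Per-message dispatch then keys on ProtocolId, a cheap integer compare,
// rather than re-comparing the full protocol name string.
fn is_for_listener(msg_protocol: ProtocolId, listener_protocol: ProtocolId) -> bool {
    msg_protocol == listener_protocol
}
```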

@altonen
Contributor

altonen commented Jul 20, 2023

This is addressed in the upcoming refactoring; we just have to fix some other, higher-priority issues first, but I should get back to that task soon, hopefully.

@sandreim
Contributor Author

@tomaka

(I'm no longer working for Parity)

I know that, but I just felt that you might have some feedback on this :)

On connection we could negotiate some shorter name, or just an integer and use that instead.

Please don't change the wire protocol names for what should be an internal fix. The protocol name is sent only once, when the substream is opened. The fact that subsequently the protocol name is compared at every message is an implementation design choice of Substrate.

I am not suggesting we change the wire names, just that we fix the internal design choice.

@altonen it's good to hear that this is being considered in the refactoring. Not trying to add any pressure or change your priorities right away, but is there any branch with this improvement that we could already test? I would prefer to test it as a PoC sooner rather than later, just so we know how much it helps when scaling up the number of paravalidators/cores. Currently we are focusing on reducing the number of network messages we send just to avoid the CPU usage issues.

@altonen
Contributor

altonen commented Jul 20, 2023

The Substrate side is ready, and the Polkadot companion is still WIP but also close to being ready. The issue is that the Polkadot networking code leans quite heavily on the concept that all messages are read and written using the same object, which clashes with the new design, so it requires some refactoring.

I'll let you know as soon as there is even a preliminary working implementation, which shouldn't take too long.

@vstakhov
Contributor

Is there any WIP PR to look at?

@altonen
Contributor

altonen commented Jul 20, 2023

@vstakhov #14197
