litep2p: Update network backend to v0.7.0 #5609
Conversation
The CI pipeline was cancelled due to the failure of one of the required jobs.
Looks good, but some things should be improved.
```diff
@@ -557,6 +554,12 @@ impl<B: BlockT + 'static, H: ExHashT> NetworkBackend<B, H> for Litep2pNetworkBac
	.with_libp2p_ping(ping_config)
	.with_libp2p_identify(identify_config)
	.with_libp2p_kademlia(kademlia_config)
	.with_connection_limits(
		// By default litep2p accepts only two connections per peer.
```
Why do we need multiple connections per peer?
Litep2p accepts at most 2 connections with the remote peer to handle cases when both peers dial each other simultaneously. For libp2p, we configure this manually to 2 connections for similar reasons:
polkadot-sdk/substrate/client/network/src/lib.rs
Lines 294 to 301 in d55b093
```rust
/// The maximum allowed number of established connections per peer.
///
/// Typically, and by design of the network behaviours in this crate,
/// there is a single established connection per peer. However, to
/// avoid unnecessary and nondeterministic connection closure in
/// case of (possibly repeated) simultaneous dialing attempts between
/// two peers, the per-peer connection limit is not set to 1 but 2.
const MAX_CONNECTIONS_PER_PEER: usize = 2;
```
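As an illustration of why a limit of two is used, here is a minimal, self-contained sketch (hypothetical names, not the actual litep2p or substrate code) of a per-peer connection table that tolerates simultaneous dialing by keeping both connections instead of nondeterministically closing one:

```rust
use std::collections::HashMap;

/// Mirrors the rationale above: allow up to two established
/// connections per peer so simultaneous dials don't force a closure.
const MAX_CONNECTIONS_PER_PEER: usize = 2;

#[derive(Default)]
struct ConnectionTracker {
    /// Number of established connections, keyed by peer id.
    established: HashMap<String, usize>,
}

impl ConnectionTracker {
    /// Returns `true` if a new connection to `peer` may be accepted.
    fn try_accept(&mut self, peer: &str) -> bool {
        let count = self.established.entry(peer.to_string()).or_insert(0);
        if *count < MAX_CONNECTIONS_PER_PEER {
            *count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut tracker = ConnectionTracker::default();
    // Our outbound dial and the peer's simultaneous inbound dial:
    assert!(tracker.try_accept("peer-a")); // our dial is kept
    assert!(tracker.try_accept("peer-a")); // their simultaneous dial is kept too
    assert!(!tracker.try_accept("peer-a")); // a third connection is rejected
}
```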
```rust
RejectReason::ConnectionClosed => "connection-closed".to_string(),
RejectReason::SubstreamClosed => "substream-closed".to_string(),
RejectReason::SubstreamOpenError(substream_error) => {
    format!("substream-open-error: {:?}", substream_error)
```
Do we really want this as a metric name?
Yep, that's a good point. I have removed the String allocation entirely and provided shorter str names, thanks 🙏
```rust
Some((RequestFailure::NotConnected, "not-connected".to_string())),
RequestResponseError::Rejected(reason) => {
    let reason = match reason {
        RejectReason::ConnectionClosed => "connection-closed".to_string(),
```
Also better to use a `Cow<'static, str>` here (fewer allocations).
I have removed the allocations entirely, since we don't really need to expose each error variant via `format!("substream-open-error: {:?}", substream_error)`.
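For illustration, here is a minimal sketch of that approach (hypothetical stand-in types, not the actual substrate code): rejection reasons map to `&'static str` labels, so recording the metric does not allocate; a `Cow<'static, str>` variant is shown for cases where an occasional dynamic label is still needed.

```rust
use std::borrow::Cow;

/// Hypothetical stand-in for the rejection reasons discussed above.
enum RejectReason {
    ConnectionClosed,
    SubstreamClosed,
    SubstreamOpenError,
}

impl RejectReason {
    /// Static label: no allocation when the metric is recorded.
    fn as_label(&self) -> &'static str {
        match self {
            RejectReason::ConnectionClosed => "connection-closed",
            RejectReason::SubstreamClosed => "substream-closed",
            // Collapse all open errors into one label instead of
            // formatting the error details into the metric name.
            RejectReason::SubstreamOpenError => "substream-open-error",
        }
    }

    /// Alternative: a `Cow` keeps the common cases allocation-free
    /// while still allowing a computed label when necessary.
    fn as_cow(&self) -> Cow<'static, str> {
        Cow::Borrowed(self.as_label())
    }
}

fn main() {
    let reason = RejectReason::SubstreamOpenError;
    println!("metric label: {}", reason.as_label());
    println!("cow label: {}", reason.as_cow());
}
```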
I've changed two things a bit since the last review:

We got around 4.7k warnings from hickory. The fix is similar to paritytech/substrate#12253 (disable logging for this crate); a sketch of such a log filter follows below.

Local node triaging (litep2p)
Other warnings:
This has resurfaced the …

Local node triaging (libp2p)
Other warnings:

Versi network testing
Manual triaging (until sub-triage-logs gains access to loki):
Warnings appeared after versi-net was scaled down from 100 to 20 validators on Saturday, roughly at …
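As a rough illustration of the kind of fix referenced above, and assuming the noisy crate emits its warnings through the standard `tracing` machinery under a target such as `hickory_proto` (an assumption here, as is the use of `tracing-subscriber` with its `env-filter` feature), a log filter can turn a dependency's output off while keeping the node's own logs:

```rust
use tracing_subscriber::EnvFilter;

fn main() {
    // Keep our own logs at `info`, but silence the (assumed)
    // `hickory_proto` target that produced ~4.7k warnings.
    let filter = EnvFilter::new("info,hickory_proto=off");

    tracing_subscriber::fmt()
        .with_env_filter(filter)
        .init();

    tracing::info!("node logging initialized");
    // Anything logged under the "hickory_proto" target is filtered out.
    tracing::warn!(target: "hickory_proto", "this warning is suppressed");
}
```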
This pull request has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/litep2p-network-backend-updates/9973/1
This release introduces several new features, improvements, and fixes to the litep2p library. Key updates include enhanced error handling, configurable connection limits, and a new API for managing public addresses. For a detailed set of changes, see the [litep2p changelog](https://github.com/paritytech/litep2p/blob/master/CHANGELOG.md#070---2024-09-05).

This PR makes use of:
- connection limits to optimize network throughput
- better errors that are propagated to substrate metrics
- public addresses API to report healthy addresses to the Identify protocol

### Warp sync time improvement

Measuring warp sync time is a bit inaccurate since the network is not deterministic and we might end up using faster peers (peers with more resources to handle our requests). However, I did not see warp sync times of 16 minutes; instead, they roughly stabilized between 8 and 10 minutes.

For measuring warp-sync time, I've used [sub-triage-logs](https://github.com/lexnv/sub-triage-logs/?tab=readme-ov-file#warp-time).

### Litep2p

Phase | Time
-|-
Warp | 426.999999919s
State | 99.000000555s
Total | 526.000000474s

### Libp2p

Phase | Time
-|-
Warp | 731.999999837s
State | 71.000000882s
Total | 803.000000719s

Closes: paritytech#4986

### Low peer count

After exposing the `litep2p::public_addresses` interface, we can report confirmed external addresses to litep2p. This should mitigate or at least improve: paritytech#4925. Will keep the issue around to confirm this.

### Improved metrics

We are one step closer to exposing similar metrics as libp2p: paritytech#4681.

cc @paritytech/networking

### Next Steps

- [x] Use public address interface to confirm addresses to identify protocol
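To make the "report healthy addresses" idea concrete, here is a small, self-contained sketch (hypothetical types and threshold, not the litep2p `public_addresses` API itself) of keeping only externally confirmed addresses, i.e. addresses on which enough distinct peers have observed us, before advertising them via Identify:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical book of candidate external addresses. An address counts
/// as "confirmed" (healthy) once enough distinct peers have observed us
/// on it, and only confirmed addresses are advertised.
struct PublicAddressBook {
    /// address -> set of peers that reported it.
    observations: HashMap<String, HashSet<String>>,
    /// Number of distinct observers required before advertising.
    confirmation_threshold: usize,
}

impl PublicAddressBook {
    fn new(confirmation_threshold: usize) -> Self {
        Self { observations: HashMap::new(), confirmation_threshold }
    }

    /// Record that `peer` observed us at `address`.
    fn record_observation(&mut self, address: &str, peer: &str) {
        self.observations
            .entry(address.to_string())
            .or_default()
            .insert(peer.to_string());
    }

    /// Addresses considered healthy enough to report to Identify.
    fn confirmed(&self) -> Vec<&str> {
        self.observations
            .iter()
            .filter(|(_, peers)| peers.len() >= self.confirmation_threshold)
            .map(|(addr, _)| addr.as_str())
            .collect()
    }
}

fn main() {
    let mut book = PublicAddressBook::new(2);
    book.record_observation("/ip4/203.0.113.7/tcp/30333", "peer-a");
    book.record_observation("/ip4/203.0.113.7/tcp/30333", "peer-b");
    book.record_observation("/ip4/10.0.0.1/tcp/30333", "peer-c");
    // Only the address seen by two distinct peers is advertised.
    println!("confirmed: {:?}", book.confirmed());
}
```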
This reverts commit 12eeb5d.