[subsystem-benchmarks] Add statement-distribution benchmarks #3863
Conversation
Force-pushed from 0976b23 to 3b4f88e
Overall, looks good to me, good job!
Left you some nit comments.
("Received from peers", 108.8000, 0.001), | ||
("Sent to peers", 123.8180, 0.001), | ||
])); | ||
messages.extend(average_usage.check_cpu_usage(&[("statement-distribution", 0.0390, 0.1)])); |
So, unless we are missing something here, can we concur that statement-distribution is not a subsystem with high resource (CPU + networking) demands?
Yeah, it feels so, at least the signature checks seem damn fast compared to approval vote signatures :)
At least now we know about it
> Yeah, it feels so, at least the signature checks seem damn fast compared to approval vote signatures :)
It is actually far fewer signature checks. For each candidate, only the backing group generates statements, so for Polkadot you get 5 signatures per candidate, and with 100 candidates you check just 500 signatures in total.
In approval voting you need at least 30 votes per candidate (in practice more than that), so 30 signatures per candidate, and with 100 candidates you actually have to check 100 * 30 = 3000 signatures, which is around 6 times more.
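A quick back-of-the-envelope sketch of that arithmetic (the group size of 5 and the 30-vote minimum are just the figures quoted above, not values read from the runtime):

```rust
// Illustrative signature-count comparison for 100 candidates.
fn main() {
    let candidates = 100u32;
    let backing_group_size = 5u32; // statements only come from the backing group
    let min_approval_votes = 30u32; // per candidate; in practice more

    let backing_signatures = candidates * backing_group_size; // 500
    let approval_signatures = candidates * min_approval_votes; // 3000

    println!("backing: {backing_signatures}, approval: {approval_signatures}");
    // approval voting checks roughly 6x more signatures
    println!("ratio: {}x", approval_signatures / backing_signatures);
}
```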
> At least now we know about it
Yes, it was meant as good news :D
/// Generates peer_connected messages for all peers in `test_authorities`
pub fn generate_peer_connected(&self) -> Vec<AllMessages> {
pub fn generate_approval_distribution_peer_connected(&self) -> Vec<AllMessages> {
Nit: to avoid repeating these 2 functions, you could name it something like
generate_subsystem_peer_connected(&self, lambda_to_transform_peer_connected_subsystem_specific_message)
and then call the lambda function in the map.
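A minimal, self-contained sketch of that suggestion; `PeerEvent`, the stub `AllMessages` enum and `TestState` are stand-ins for the real subsystem-bench types, only the closure-parametrised shape is the point:

```rust
// Stand-in types; the real code would use the subsystem-bench network/message types.
struct PeerEvent {
    peer: String,
}

enum AllMessages {
    ApprovalDistribution(PeerEvent),
    StatementDistribution(PeerEvent),
}

struct TestState {
    peers: Vec<String>,
}

impl TestState {
    /// One generator instead of two: the caller passes the closure that wraps the
    /// peer-connected event into the subsystem-specific message.
    fn generate_peer_connected<F>(&self, wrap: F) -> Vec<AllMessages>
    where
        F: Fn(PeerEvent) -> AllMessages,
    {
        self.peers
            .iter()
            .map(|peer| wrap(PeerEvent { peer: peer.clone() }))
            .collect()
    }
}

fn main() {
    let state = TestState { peers: vec!["alice".into(), "bob".into()] };
    // Enum variant constructors can be passed directly as the wrapping closure.
    let _approval = state.generate_peer_connected(AllMessages::ApprovalDistribution);
    let _statement = state.generate_peer_connected(AllMessages::StatementDistribution);
}
```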
const LOG_TARGET: &str = "subsystem-bench::candidate-backing-mock";

pub struct MockCandidateBacking {
Nit: I would go with keeping the current convention, where we clearly define a CandidateBackingState structure that gets included here.
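A rough sketch of the convention being referred to (the field names are assumptions for illustration, not taken from the PR):

```rust
/// Hypothetical state struct, kept separate from the mock subsystem itself,
/// following the convention the reviewer mentions.
#[derive(Default)]
pub struct CandidateBackingState {
    /// Example bookkeeping the mock might need, e.g. statements seen so far.
    pub statements_received: usize,
}

pub struct MockCandidateBacking {
    state: CandidateBackingState,
}

impl MockCandidateBacking {
    pub fn new(state: CandidateBackingState) -> Self {
        Self { state }
    }
}
```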
.chain(two_hop_y_peers_and_groups)
{
    let messages_sent_count = messages_tracker.get_mut(group_index).unwrap();
    if *messages_sent_count {
I'm not too sure about this, my understanding is that a node could receive a statement originating in a group multiple times from different nodes, so I think we should let this send that message as well.
We need to get the same messages as in the real network, so all of them have to go.
("Received from peers", 108.8000, 0.001), | ||
("Sent to peers", 123.8180, 0.001), | ||
])); | ||
messages.extend(average_usage.check_cpu_usage(&[("statement-distribution", 0.0390, 0.1)])); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yeah, it feels so, at least the signature checks seem damn fast compared to approval vote signatures :)
orchestra::FromOrchestra::Communication { msg } => {
    gum::trace!(target: LOG_TARGET, msg=?msg, "recv message");

    match msg {
I'd extract this to a function to help with readability.
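A rough sketch of the suggested shape, keeping the orchestra arm thin; `handle_message` is a hypothetical helper name, not something from the PR:

```rust
orchestra::FromOrchestra::Communication { msg } => {
    gum::trace!(target: LOG_TARGET, msg = ?msg, "recv message");
    // The long `match msg { ... }` body moves into a dedicated helper so the
    // event loop stays readable.
    self.handle_message(msg);
},
```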
    _block_hash,
    RuntimeApiRequest::AsyncBackingParams(sender),
) => {
    let _ = sender.send(Ok(AsyncBackingParams {
These are not the production parameters that we use, but I guess it doesn't really matter for this bench.
Yeah, but it's better to set the production ones.
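A hedged sketch of what that could look like; `max_candidate_depth: 3, allowed_ancestry_len: 2` is my assumption of the production configuration and should be double-checked against the live runtime before hard-coding it in the mock:

```rust
RuntimeApiMessage::Request(_block_hash, RuntimeApiRequest::AsyncBackingParams(sender)) => {
    // Assumed production-like values, not confirmed in this thread.
    let _ = sender.send(Ok(AsyncBackingParams {
        max_candidate_depth: 3,
        allowed_ancestry_len: 2,
    }));
},
```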
    }));
},
RuntimeApiMessage::Request(_parent, RuntimeApiRequest::Version(tx)) => {
    tx.send(Ok(RuntimeApiRequest::DISABLED_VALIDATORS_RUNTIME_REQUIREMENT))
This needs to return the latest version, right?
At least a version that supports DisabledValidators
Nice work! Thank you @AndreiEres
Left an idea but overall looks good, nice work!
loop {
    let manifests_count = state
        .manifests_tracker
        .values()
        .filter(|v| v.load(Ordering::SeqCst))
        .collect::<Vec<_>>()
        .len();
    gum::debug!(target: LOG_TARGET, "{}/{} manifest exchanges", manifests_count, candidates_advertised);

    if manifests_count == candidates_advertised {
        break;
    }
    tokio::time::sleep(Duration::from_millis(50)).await;
If I'm reading the code correctly, the manifests we are advertising already contain statements from all members of the backing group.
Therefore the only gossip that is emulated is the manifest/acknowledgement exchange. The statement gossip is not really stressing the subsystem.
In reality every backing group could continue sending Statement messages even after the backed candidate is fetched (and we care to record these so that they can be punished in case of a dispute).
Maybe a manifest is advertised with only 2 backing votes, but the backing group keeps issuing more votes even after that.
However, the code is already quite complicated and we can see that the CPU consumption of the subsystem is tiny, so I don't think implementing this will yield any benefit right now.
Thanks @alindima! A ticket describing this improvement should do it justice!
Decided with @alindima that in this specific case it doesn't make a big difference.
* master: (93 commits)
  Fix broken windows build (#4636)
  Beefy client generic on aduthority Id (#1816)
  pallet-staking: Put tests behind `cfg(debug_assertions)` (#4620)
  Broker new price adapter (#4521)
  Change `XcmDryRunApi::dry_run_extrinsic` to take a call instead (#4621)
  Update README.md (#4623)
  Publish `chain-spec-builder` (#4518)
  Add omni bencher & chain-spec-builder bins to release (#4557)
  Moves runtime macro out of experimental flag (#4249)
  Filter workspace dependencies in the templates (#4599)
  parachain-inherent: Make `para_id` more prominent (#4555)
  Add metric to measure the time it takes to gather enough assignments (#4587)
  Improve On_demand_assigner events (#4339)
  Conditional `required` checks (#4544)
  [CI] Deny adding git deps (#4572)
  [subsytem-bench] Remove redundant banchmark_name param (#4540)
  Add availability-recovery from systematic chunks (#1644)
  Remove workspace lints from templates (#4598)
  `sc-chain-spec`: deprecated code removed (#4410)
  [subsystem-benchmarks] Add statement-distribution benchmarks (#3863)
  ...
[subsystem-benchmarks] Add statement-distribution benchmarks (paritytech#3863)

Fixes paritytech#3748

Adds a subsystem benchmark for the statement-distribution subsystem. Results in CI (reference hw):

```
$ cargo bench -p polkadot-statement-distribution --bench statement-distribution-regression-bench --features subsystem-benchmarks

[Sent to peers] standart_deviation 0.07%
[Received from peers] standart_deviation 0.00%
[statement-distribution] standart_deviation 0.97%
[test-environment] standart_deviation 1.03%

Network usage, KiB              total       per block
Received from peers             1088.0000   108.8000
Sent to peers                   1238.1800   123.8180

CPU usage, seconds              total       per block
statement-distribution          0.3897      0.0390
test-environment                0.4715      0.0472
```
Fixes #3748
Adds a subsystem benchmark for the statement-distribution subsystem.
Results in CI (reference hw):