Improves benchmarks for pallet parachain-staking #2317
Conversation
// Measured: `27979`
// Estimated: `193037`
// Minimum execution time: 131_740_000 picoseconds.
Weight::from_parts(134_429_600, 193037)
significant 2x ref time, 6x pov_size
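(Aside: a minimal sketch of how to read these generated lines; `Weight::from_parts` carries the two dimensions that the review comments here compare against their previous values:)

```rust
use frame_support::weights::Weight;

fn main() {
    // Weight::from_parts(ref_time, proof_size):
    //  - ref_time:   execution time on reference hardware, in picoseconds
    //  - proof_size: PoV (proof-of-validity) bytes the call may touch
    let w = Weight::from_parts(134_429_600, 193_037);
    assert_eq!(w.ref_time(), 134_429_600);
    assert_eq!(w.proof_size(), 193_037);
}
```

"2x ref time, 6x pov_size" refers to those two dimensions independently.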
Did ref time go up because the benchmark is SCALE decoding more items? Or is there more to it than that?
// Measured: `37308`
// Estimated: `330557`
// Minimum execution time: 168_083_000 picoseconds.
Weight::from_parts(168_083_000, 330557)
significant 2x ref time, 10x pov_size
// Measured: `29930`
// Estimated: `238138`
// Minimum execution time: 137_954_000 picoseconds.
Weight::from_parts(137_954_000, 238138)
significant 4x ref time, 4x pov_size
// Measured: `48167`
// Estimated: `426257`
// Minimum execution time: 275_279_000 picoseconds.
Weight::from_parts(275_279_000, 426257)
significant for delegate weight. May not fit in a block.
As per ts-tests we are now able to fit at most 9 delegate calls (was ~50 before).
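(A back-of-the-envelope check of that number, assuming the standard Cumulus 5 MiB PoV limit and the usual 75% allowance for normal-class extrinsics; both are assumptions about this runtime's configuration rather than figures from the PR:)

```rust
fn main() {
    // Assumed limits: 5 MiB PoV budget, 75% of it for normal-class txs.
    let normal_class_pov: u64 = 5 * 1024 * 1024 * 75 / 100; // 3_932_160 bytes
    // proof_size of `delegate` from the new weights above.
    let delegate_pov: u64 = 426_257;
    // PoV, not ref_time, is the binding constraint:
    assert_eq!(normal_class_pov / delegate_pov, 9);
}
```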
I think this is true only for the worst-case scenario. Which test?
I added https://github.com/PureStake/moonbeam/pull/2317/files#diff-007387d2d904cb5a8973816c45ce9b11bbfc8c095d0075798fd625ef0a260141R15, which shows the max txs that fit within the block.
Since we're overestimating, block utilization is affected negatively, and the test helps us visualize by how much.
Results: #2317 (comment) & #2317 (comment)
// Measured: `15515`
// Estimated: `37960`
// Minimum execution time: 55_538_000 picoseconds.
Weight::from_parts(58_365_791, 37960)
significant 2x ref time, 5x pov_size
// Measured: `27979`
// Estimated: `193037`
// Minimum execution time: 131_740_000 picoseconds.
Weight::from_parts(134_429_600, 193037)
significant 2x ref time, 6x pov_size
// Measured: `15515`
// Estimated: `37960`
// Minimum execution time: 56_573_000 picoseconds.
Weight::from_parts(58_214_753, 37960)
significant 2x ref time, 5x pov_size
// Measured: `29930`
// Estimated: `238138`
// Minimum execution time: 137_954_000 picoseconds.
Weight::from_parts(137_954_000, 238138)
significant 4x ref time, 4x pov_size
pallets/parachain-staking/src/lib.rs (Outdated)
@@ -1133,64 +1075,32 @@ pub mod pallet {

/// Temporarily leave the set of collator candidates without unbonding
#[pallet::call_index(11)]
-#[pallet::weight(<T as Config>::WeightInfo::go_offline())]
+#[pallet::weight(<T as Config>::WeightInfo::go_offline(1_000))]
What's the long-term solution for this limit? We could define yet another runtime config for the max; then it could be safely enforced and used as an upper bound here.
You could do the same but without a runtime constant (just define a compile-time one). I would suggest using a constant here instead of a magic number anyway.
The question is: do we want to enforce it when joining candidates? It could be used as a DoS vector...
Yeah that would be my choice as well.
I would enforce it when joining, but we have to clearly state that limit in the breaking changes.
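(A minimal sketch of the constant-instead-of-magic-number idea discussed above; identifiers like `MAX_CANDIDATES` and `TooManyCandidates` are illustrative, not taken from the PR:)

```rust
// One compile-time constant serves both as the weight hint's upper bound
// and as an entry check, so the benchmarked worst case cannot be exceeded.
const MAX_CANDIDATES: u32 = 1_000;

fn join_candidates(current_candidate_count: u32) -> Result<(), &'static str> {
    // Enforcing the cap on entry also closes the DoS vector mentioned
    // above: the candidate set can never grow past what the weights assume.
    if current_candidate_count >= MAX_CANDIDATES {
        return Err("TooManyCandidates");
    }
    Ok(())
}

fn main() {
    assert!(join_candidates(MAX_CANDIDATES - 1).is_ok());
    assert!(join_candidates(MAX_CANDIDATES).is_err());
}
```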
I added https://github.com/PureStake/moonbeam/pull/2317/files#diff-007387d2d904cb5a8973816c45ce9b11bbfc8c095d0075798fd625ef0a260141R15, which shows the max txs that fit within the block.
Since we're overestimating, block utilization is affected negatively, and the test helps us visualize by how much.
Results: #2317 (comment) & #2317 (comment)
.saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(x.into())))
.saturating_add(T::DbWeight::get().writes(1_u64))
.saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(x.into())))
.saturating_add(Weight::from_parts(0, 31269).saturating_mul(x.into()))
I think `31269.saturating_mul(x)` is going to be problematic here because `x = 100` in our runtimes and `31_269 * 100` is way too close to our block limit.
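(Rough numbers behind that concern, under the same assumed 5 MiB / 75% normal-class PoV budget as above:)

```rust
fn main() {
    let per_item_pov: u64 = 31_269; // proof_size added per item, from the formula above
    let x: u64 = 100;               // value used in our runtimes
    let total = per_item_pov * x;   // 3_126_900 bytes, ~3.0 MiB
    let normal_class_pov: u64 = 5 * 1024 * 1024 * 75 / 100; // 3_932_160 bytes
    // A single worst-case call would claim roughly 80% of the block's PoV budget.
    println!("{:.1}%", 100.0 * total as f64 / normal_class_pov as f64);
}
```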
One thing to note: we have deprecated these methods in favor of batch schedule revoke, so perhaps it's alright for them to have a high enough weight to incentivize people to move away from them.
Additionally, now that I notice it, this functionality has already passed its 6 months of deprecation and should be marked for removal (#1760).
Coverage generated "Mon Jun 5 11:34:18 UTC 2023": Master coverage: 70.87%
* Properly use scaling parameter in benchmarks
* Include updated weights
* Undo execute_leave_delegators_worst changes
@notlesh I added the benchmark test (conditional). Let me know if there are any other extrinsics you'd wanna see up there.
Interesting, I got slightly different results -- maybe it is highly dependent on hardware (which wouldn't surprise me):
cb: (context: DevTestContext) => Promise<void>
) {
describeDevMoonbeam(`${title} - ${method}`, (context) => {
it("should fit minimum 2", async function () {
We could parameterize this (2) and make these a permanent test. It might add a lot of time to the tests, though.
There's a conditional env variable `STAKING_BENCHMARK=1` that controls whether or not to run this test, but I'm all for parameterizing it if that adds value.
f4a91db adds weight and proof. Notice that weight is very underutilized, and also that …
Yeah, this is a common theme here: certain "old" extrinsics do not have a weight hint parameter and were historically underestimating the weight. Currently we assume in the upfront weight that a delegator has …
Looks good.
Can you answer some of the questions left, please?
Also, please open a ticket for each follow-up task mentioned in the discussion or the code.
Awesome work!
@@ -59,7 +59,14 @@ describeDevMoonbeam("Staking - Consts - MaxDelegationsPerDelegator", (context) =>
context.createBlock(
randomCandidatesChunk.map((randomCandidate) =>
context.polkadotApi.tx.parachainStaking
-.delegate(randomCandidate.address, MIN_GLMR_DELEGATOR, 1, maxDelegationsPerDelegator)
+.delegateWithAutoCompound(
Any reason this was changed to autocompound and not the other delegate calls (like the previous one)?
Probably missed changing it back. Initially the chunks were failing with a `Transaction Ancient` error, since the txs were generated before the `createBlock` and we could only fit at most 8, so I was instead using `delegateWithAutoCompound`, since the `delegate` method is set for removal now (I didn't wanna remove it in this PR). I can change it back and generate the txs within the chunk, but it's pretty inconsequential since the next RT should effectively swap out all `delegate` calls with `delegateWithAutoCompound`.
Created
/cc @notlesh think I was wrong, we never marked …
Seems fine to me
What does it do?
Fixes staking benchmarks to account for worst-case weight and possibly refund the excess.
❗ Block Underutilization
The PR changes certain staking benchmarks, which has a direct effect on how many of these transaction types can fit in a block.
Note that this causes block underutilization, since we overestimate the weight for these extrinsics.
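(The counterpart to that overestimation is the charge-then-refund pattern this PR moves toward: charge the benchmarked worst case up front, then refund the surplus post-dispatch. A sketch with illustrative parameter usage, not the exact code from this PR:)

```rust
// Inside the pallet (sketch): the weight hint charges the benchmarked
// worst case, and the call refunds down to the actual cost by returning
// `Some(actual_weight)` in the `PostDispatchInfo`.
#[pallet::weight(<T as Config>::WeightInfo::schedule_revoke_delegation(
    T::MaxDelegationsPerDelegator::get() // worst case charged up front
))]
pub fn schedule_revoke_delegation(
    origin: OriginFor<T>,
    collator: T::AccountId,
) -> DispatchResultWithPostInfo {
    let delegator = ensure_signed(origin)?;
    let actual_count = /* number of delegations actually touched */ 0u32;
    // ... do the work ...
    // Refund the surplus: only the weight for `actual_count` is kept.
    Ok(Some(<T as Config>::WeightInfo::schedule_revoke_delegation(actual_count)).into())
}
```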
❗ MAX_CANDIDATES benchmark limit
The staking benchmarks utilize a theoretical limit of 200 max candidates, which at the moment IS NOT enforced in the code. However, this is scheduled to be included soon in a subsequent PR. This is purely to compute the worst-case weights.
What important points reviewers should know?
Is there something left for follow-up PRs?
What alternative implementations were considered?
Are there relevant PRs or issues in other repositories (Substrate, Polkadot, Frontier, Cumulus)?
What value does it bring to the blockchain users?