Extend the lower bounds of some of the benchmarks to also include `0` #12386
Conversation
Thank you!
After this is merged, I will run everything again and see what else we can find, but this is probably solving most of the issues :)
No need to thank me; it was my PR so it's my responsibility to see it through and get it up to a standard of quality that I (we) expect, or die trying. (: I should be thanking you for being patient with me.
One major group of benchmarks that I didn't touch, and which are potentially problematic (before my PR, at least on my machine, at least one of them was generating a weight of zero), were the benches of `pallet_election_provider_multi_phase`, with the bounds defined in:

```rust
/// The numbers configured here could always be more than the maximum limits of staking pallet
/// to ensure election snapshot will not run out of memory. For now, we set them to smaller values
/// since the staking is bounded and the weight pipeline takes hours for this single pallet.
pub struct ElectionProviderBenchmarkConfig;

impl pallet_election_provider_multi_phase::BenchmarkingConfig for ElectionProviderBenchmarkConfig {
	const VOTERS: [u32; 2] = [1000, 2000];
	const TARGETS: [u32; 2] = [500, 1000];
	const ACTIVE_VOTERS: [u32; 2] = [500, 800];
	const DESIRED_TARGETS: [u32; 2] = [200, 400];
	const SNAPSHOT_MAXIMUM_VOTERS: u32 = 1000;
	const MINER_MAXIMUM_VOTERS: u32 = 1000;
	const MAXIMUM_TARGETS: u32 = 300;
}
```

So the thing is, from what I can see there is an internal invariant between the components of those benchmarks, so we could either:

a) deal with this in the benchmarks one-by-one, or
b) teach the benchmarking machinery about such constraints.

The main downside of (a) is that it will still generate some numbers for the values of components which don't make sense, which I'm pretty sure will negatively affect the results when done on this scale. So I think (b) would be preferable here, even though it would require introducing some extra complexity in the benchmarking machinery. (Although I suppose we don't need to support arbitrary constraints; just very simple ones as needed here.)

@shawntabrizi Does this sound reasonable to you?

Also, and this is only tangentially related, I am intrigued by the comment that the election pallet is extremely slow and takes hours to run. Now I kinda want to see if it'd be possible to make it faster. (:
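To make option (a) concrete, here is a minimal, hypothetical sketch of what per-benchmark handling could look like, assuming the invariant in question is something along the lines of `active_voters <= voters` (inferred from the constant names above; this is not actual Substrate code):

```rust
/// Hypothetical helper illustrating option (a): each affected benchmark clamps
/// its own components so that the sampled values never violate the assumed
/// invariant `active_voters <= voters`.
fn clamp_components(voters: u32, active_voters: u32) -> (u32, u32) {
	// Clamping keeps the benchmark from failing on an invalid combination...
	let active_voters = active_voters.min(voters);
	// ...but the regression still receives data points for combinations that
	// "don't make sense", which is the downside of (a) mentioned above.
	(voters, active_voters)
}

fn main() {
	// A sampled point where the raw components would violate the invariant:
	assert_eq!(clamp_components(0, 500), (0, 0));
	// A valid point passes through unchanged:
	assert_eq!(clamp_components(2000, 800), (2000, 800));
}
```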
I think we should merge this if it already improves the situation.
Maybe @kianenigma can tell us if we can somehow improve the constants in the multi-phase benches?
Can we merge this, @koute?
@ggwpez Ah sorry, I somehow missed that you +1'd it! Yes, we can! I'll just resolve the conflicts.
bot merge
Extend the lower bounds of some of the benchmarks to also include `0` (paritytech#12386)

* Extend the lower bounds of some of the benchmarks to also include `0`
* Fix verify snippet for `pallet_bounties/spend_funds`
This PR extends the lower bounds of the components of some of the benchmarks to also include `0`.

This is, in a way, a follow-up of #11806, where I made it so that the base weights are always set to be equal to the minimum execution time. That fixed a problem where (due to how linear regression works) the base weights could sometimes be zero, but at a cost: some of the weights could now be too high, since their benchmarks were never actually executed over the full range of their inputs (e.g. see this comment: #12325 (comment)).
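(Illustrative numbers, not taken from any real benchmark: suppose an extrinsic's true cost is `8µs + 2µs·x`. If the benchmark is only run for `x ∈ [1, 100]`, the minimum measured execution time is roughly 10µs (at `x = 1`), so the base weight becomes 10µs and the reported weight is `10µs + 2µs·x`, overestimating every point by about one per-item cost. Running the benchmark at `x = 0` as well lets the minimum execution time capture the true fixed cost of ~8µs, and the overestimate disappears.)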
So here I went through our benchmarks and modified those whose components started at `1` to now start at `0`. I didn't do this blindly nor exhaustively (we still have a lot of such benchmarks); I did it either in places where it made sense, or in places where #12325 has shown that there might potentially be a problem. I mostly looked at the benchmarks where the DB reads/writes changed, and at those where the base weights changed drastically.

I ran every benchmark I modified, and the lowered bounds should not cause any runtime errors. Most of the changes are trivial.
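As a rough illustration of the kind of change involved, here is a schematic benchmark using the FRAME `benchmarks!` macro. The pallet, `remove_items` extrinsic, `MaxItems` bound, and `Items` storage map are all made up rather than taken from the actual diff; only the component range line is the point:

```rust
use frame_benchmarking::{benchmarks, whitelisted_caller};
use frame_system::RawOrigin;

benchmarks! {
	remove_items {
		// Before: the component started at 1, so the benchmark never measured
		// the fixed cost of the call with zero items:
		//     let n in 1 .. T::MaxItems::get();
		// After: the range also includes the empty case:
		let n in 0 .. T::MaxItems::get();
		let caller: T::AccountId = whitelisted_caller();
		for i in 0 .. n {
			Pallet::<T>::add_item(RawOrigin::Signed(caller.clone()).into(), i)?;
		}
	}: _(RawOrigin::Signed(caller), n)
	verify {
		// The verify block now has to hold for `n == 0` as well, e.g.
		// "no items remain" rather than "at least one item was removed".
		assert!(Items::<T>::iter().next().is_none());
	}
}
```

Lowering the bound is usually only half of the work: setup and `verify` code written with a non-zero component in mind may need a matching adjustment, as with the `pallet_bounties/spend_funds` verify snippet mentioned in the merge commit above.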