New Weights for All Pallets #12325 (Merged)
Commits (14):
- df73f01: new weights for everything (shawntabrizi)
- 351c6f7: fix (shawntabrizi)
- b3c028f: fmt (shawntabrizi)
- 552a3ae: Merge branch 'master' into shawntabrizi-weights (shawntabrizi)
- 60ac7b4: new batch (shawntabrizi)
- 9c53e10: fmt (shawntabrizi)
- 2d59687: Merge branch 'master' into shawntabrizi-weights (shawntabrizi)
- 0eb0243: new batch (shawntabrizi)
- d4ba91c: Update run_all_benchmarks.sh (shawntabrizi)
- 3c0eabf: add headers (shawntabrizi)
- f408183: Merge branch 'master' into shawntabrizi-weights (shawntabrizi)
- bd1f558: update weights (shawntabrizi)
- 7403cf4: Update lib.rs (shawntabrizi)
- 0795f15: block and extrinsic weight (shawntabrizi)
Conversations
Okay, so I think I found an issue @koute @ggwpez

If you see these results, the base weight is now `48_224_000`, and the per-item weight is `46_506_091`. This is because each `sr25519` verification is around `4X_000_000` weight. Based on the changes here: #11804

We now take the minimum weight among all benchmarks, and make that the y-intercept.

In this case, the benchmark runs between `[1, 100]`, which means there was never a case where we tested this benchmark at `0`, and thus the minimum weight is the weight when `i = 1`. The slope is also this, so we are basically double counting. In this case, the y-intercept SHOULD have been near 0, since this function does nothing, but we never tested for that.

Potential Solutions: benchmark over `[0, 100]` rather than `[1, 100]`... but there are probably many situations where users do not start their benchmarks at a strictly 0 value.
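To make the double counting concrete, here is a small runnable sketch (illustrative only, using the numbers quoted above; this is not code from the PR):

```rust
/// Illustrative sketch: why taking the minimum observed weight as the
/// y-intercept double counts when the component range starts at 1.
fn main() {
    // Numbers quoted in this thread.
    let slope: u64 = 46_506_091; // per-item weight: one sr25519 verification
    let base: u64 = 48_224_000; // minimum observed weight, measured at i = 1

    // Fitted formula: weight(i) = base + slope * i.
    // At i = 1 this charges base + slope, i.e. roughly the cost of two
    // verifications, even though only one verification actually runs.
    let charged_at_one = base + slope;
    let actual_at_one = slope; // true cost if the y-intercept were near 0
    println!("charged: {charged_at_one}, actual: ~{actual_at_one}");
}
```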
Hmmm... this is a fair point! Although I don't think we can do much to handle this case automatically without making some sort of tradeoff which is going to be wrong in certain situations.

There isn't really a way for the benchmarking machinery to know how the benchmark will behave at `0` without actually running it at `0`. Sure, it can linearly extrapolate that information, and in this case it would be right! But in other cases it's going to be wrong, depending on whether the data is actually perfectly linear, on the range of input and output values, etc. So to me this is more a question of which tradeoff we want to pick. The consequence of the weights being too high is lower throughput of the chain (which is bad); the consequence of the weights being too low is a potential denial-of-service attack and/or accidental overload of the chain (which I'd argue is worse).

Personally I'd probably just go for changing the range of the benchmark to `[0, 100]`. I don't think it's a huge thing to ask users to run their benchmarks over the whole input domain? Especially the minimum and the maximum boundary. If I may exaggerate a little bit and take this to the extreme, supporting the use case of "I want to have this function work in the range of [0, 10000000] while I only benchmark/test it in the range of [100, 200]" would be quite weird, wouldn't it? So I think it's pretty natural to just require the whole input domain to be covered by the benchmarks.

@ggwpez Your thoughts?
Exactly. The developer of the benchmark has to ensure that they always put the legal minimum values in the component range.

We cannot hack around such a shortcoming in FRAME. Otherwise the benchmarking will assume that `1` is the lowest possible value and therefore use it to calculate the minimum.
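As an illustration of "legal minimum values in the component range", here is a schematic fragment (a hypothetical multisig-style benchmark, elided and not compilable as-is): when a call is only valid with at least two signatories, the range should start at that legal minimum rather than at 0 or 1:

```rust
benchmarks! {
	// Hypothetical: the call is only legal with 2 or more signatories,
	// so the component range starts at the legal minimum, not at 0.
	as_multi {
		let s in 2 .. 100;
		// ... setup elided ...
	}: {
		// ... call elided ...
	}
}
```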
If we have a consensus that we'll go with "the benchmarks should cover the whole input domain", then I can go through our benchmarks and fix them so that they do start at the minimum, and make a PR.
Yes I think so. @shawntabrizi any objections?
If you want to - yes sure! Otherwise we can create an issue for it.
This makes me question the new algorithm we are using, presuming the slowest extrinsic is the base extrinsic weight. I wonder if there are ways we can correct for negative numbers without introducing such a large assumption...

But yeah, I mean correcting the ranges also seems okay for now.
Well, one hack we could do is to only do this if the base weight is negative, but that seems like only a partial fix to me. The linear regression can still generate a base weight of zero, or a base weight of `1`, or `10`, or `100`, or another low number which is lower than the actual base weight if the extrinsic were to be executed with its components set to `0`.

I definitely agree with you that this is not perfect, and it would be nice to do something better. But I don't really see what that would be without introducing other serious limitations.

Considering the main problem here is that users might not run their benchmarks at `0` to get a true base weight, maybe we could by default emit an error like "the components' range does not cover `0`; are you sure this is what you want?", with a longer explanation of the problem, and allow users to disable the check with an `#[attribute]` of some sort? Then people would either correct their benchmarks, or opt in that the minimums are actual true minimums. (I'm not entirely sure if this is a good idea though; we'd probably have to empirically test how many benchmarks per average chain this would trigger, and how many benchmarks would have to be corrected.)

Okay, great! I'll whip up a PR extending the ranges then.
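Regarding the check proposed above: a purely hypothetical, runnable sketch of what such a default warning might look like. Neither this function nor the opt-out exists in FRAME; all names here are invented for illustration:

```rust
/// Hypothetical sketch of the proposed check: warn when a linear component's
/// benchmarked range does not start at the domain minimum, unless the
/// benchmark author explicitly opted out.
struct Component {
    name: &'static str,
    benched_min: u32, // lowest value the benchmark actually ran at
    domain_min: u32,  // lowest value the extrinsic can legally be called with
}

fn check_covers_minimum(c: &Component, opted_out: bool) -> Result<(), String> {
    if c.benched_min > c.domain_min && !opted_out {
        return Err(format!(
            "component `{}`: benchmarked range starts at {} but the domain \
             minimum is {}; the fitted base weight may double count",
            c.name, c.benched_min, c.domain_min
        ));
    }
    Ok(())
}

fn main() {
    let i = Component { name: "i", benched_min: 1, domain_min: 0 };
    println!("{:?}", check_covers_minimum(&i, false)); // Err(...): warns by default
    println!("{:?}", check_covers_minimum(&i, true)); // Ok(()): explicitly opted out
}
```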
All extrinsics have a base weight added to them, so it is possible that an extrinsic with its components set to 0 would give an extrinsic-specific weight of 0, and that is okay.
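A quick sketch of why that is okay. In a real runtime the system-wide per-extrinsic base weight (e.g. `ExtrinsicBaseWeight` from frame_system's defaults) is charged on top of the pallet-specific, benchmarked weight; the constant below is a made-up value for illustration:

```rust
/// Illustrative only: the dispatch weight an extrinsic is charged is the
/// system-wide per-extrinsic base weight plus the pallet-specific,
/// benchmarked weight.
fn total_weight(pallet_specific: u64) -> u64 {
    const EXTRINSIC_BASE_WEIGHT: u64 = 125_000_000; // made-up value
    EXTRINSIC_BASE_WEIGHT + pallet_specific
}

fn main() {
    // Even if the benchmarked, pallet-specific part is 0 at zero components,
    // the caller still pays the system-wide base weight.
    assert_eq!(total_weight(0), 125_000_000);
}
```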