Benchmark System::set_code
#13192
Comments
@shawntabrizi said that he had issues back in the days when he tried it, but he didn't really remember them.
Can I try it? So we need a code-size-dependent benchmark here?
Yes.
@shawntabrizi left this comment. I doubt that I can set up a benchmark if even Shawn failed. But the comment is from 04/2020, so maybe something has changed and it's easier now.
You should try some different code sizes of runtimes, but that should not change the behavior of the benchmark. We don't iterate over the wasm code or anything like that, so the size should not change the time it takes to execute the function. There is surely some impact, but I don't think it is measurable or really changes the numbers.
It is not doing that :P It is reading the version to compare it against the current version.
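To illustrate what that version check roughly looks like, here is a minimal sketch. The types are simplified stand-ins, not the real `sp_version::RuntimeVersion` or the actual frame-system logic; only the comparison idea is taken from the discussion.

```rust
// Simplified stand-in for the runtime version embedded in a wasm blob.
#[derive(Debug, PartialEq)]
struct RuntimeVersion {
    spec_name: String,
    spec_version: u32,
}

#[derive(Debug, PartialEq)]
enum SetCodeError {
    SpecNameMismatch,
    SpecVersionNotIncreased,
}

// Compare the version decoded from the new blob against the currently
// running version: same spec_name, strictly increased spec_version.
fn check_version(current: &RuntimeVersion, new: &RuntimeVersion) -> Result<(), SetCodeError> {
    if new.spec_name != current.spec_name {
        return Err(SetCodeError::SpecNameMismatch);
    }
    if new.spec_version <= current.spec_version {
        return Err(SetCodeError::SpecVersionNotIncreased);
    }
    Ok(())
}
```

The real check also needs to decompress the blob and decode the embedded version before the comparison can happen, which is the part discussed below.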
And is said decompression time dependent on the content? So if I use a vec with constant data, does it have different runtime characteristics than a vec with random data?
I would propose you just collect some runtimes. We have 2 runtimes in Substrate itself, 4 in Polkadot and even more in Cumulus. If you compile all of them and use them for the benchmarking you are probably good. Maybe you could also use Acala/Moonbeam, as I remember that they had relatively big runtimes.
Probably, but I'm no compression expert. :D
I don't understand. Should I compile and run the benchmarks on my system and just add my determined weight-fn for …? Or should I collect the runtimes somewhere (commit the blobs to git 🤔) and let the benchmarks run on CI hardware like a normal benchmark? Is it really possible to create a good linear regression model with these few datapoints?
This one.
I meant this as some sort of starting point to get a "feeling" on how the size of the data influences the weight. Later we can still think of adding some fake data to the wasm file or something like that to increase the size. |
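The "fake data" idea mentioned above can be done without changing the module's behavior by appending a wasm custom section, which engines ignore. This is a hedged sketch under the assumption that the blob is an *uncompressed* wasm module; the function names are hypothetical, not from the codebase.

```rust
// Append an unsigned-LEB128 encoded value to `out`, as the wasm binary
// format requires for section sizes and name lengths.
fn leb128(mut v: u32, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

// Append a custom section (id 0) named `name` carrying `payload` filler
// bytes. Engines skip custom sections, so execution is unaffected while
// the file size grows.
fn pad_wasm(wasm: &mut Vec<u8>, name: &str, payload: &[u8]) {
    let mut body = Vec::new();
    leb128(name.len() as u32, &mut body);
    body.extend_from_slice(name.as_bytes());
    body.extend_from_slice(payload);
    wasm.push(0); // custom section id
    leb128(body.len() as u32, wasm);
    wasm.extend_from_slice(&body);
}
```

Note that if the benchmark feeds *compressed* blobs to `set_code`, highly compressible filler like zero bytes would barely change the compressed size, which ties back to the content-dependence question above.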
Ok, what would be a good place in the repo for the runtimes?
Sounds good.
Somewhere in the frame-system crate, but we first should find out if we really need them by doing some experiments. We can later still come up with a good place for them.
I ran into a problem. When I want to use a prebuilt binary for the benchmark, I get an error. This error originates from … Is there a tool to manipulate these values in the binary files?
If you just test them one-by-one, wouldn't it be enough to change your runtime version to match it in the mock.rs? (substrate/frame/system/src/mock.rs, line 49 @ 3e71d60)
Or disable the check when …
In the tests it seems to be …
Which slightly falsifies the benchmarks because of the skipped code.
If you run the benchmarks in the node, you would have to change it in the node runtime then: (substrate/bin/node/runtime/src/lib.rs, line 128 @ 2dbf625)
Is it possible to change that in the runtime? Isn't it a compiled-in constant?
Why do you need to change it in the runtime? I thought you were doing one-off testing with different WASM blobs. |
Ah, I misunderstood you. You meant I should change it when crafting my test wasm runtimes? One issue is that the names differ depending on whether you run the benchmarks or the benchmark tests, so one of them will fail.
I think the "slowest part" of …
But isn't this a possible footgun? If someone compiles production code with the runtime-benchmarks feature, they accidentally disable these validity checks.
This check is just there to prevent human errors; it is not a crucial check. People can also use …
Now I have an array of runtime blobs which I iterate like this in the benchmark setup fn. But is it possible to iterate and remap …?
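One way the setup described above could look is to select a blob from a fixed array via the benchmark component value. This is a hypothetical sketch: the blob contents are placeholders, not real runtimes, and the function name is an assumption, not code from the PR.

```rust
// Hypothetical fixed set of runtime blobs of increasing size; a linear
// benchmark component `c` selects one of them by index.
const RUNTIMES: &[&[u8]] = &[
    b"substrate-node-runtime",      // stand-in for e.g. the kitchensink blob
    b"polkadot-runtime",            // placeholder bytes, not a real wasm file
    b"cumulus-parachain-runtime",
];

// Map the component value to a blob, clamping so out-of-range values
// still select a valid runtime instead of panicking.
fn runtime_for_component(c: usize) -> &'static [u8] {
    RUNTIMES[c.min(RUNTIMES.len() - 1)]
}
```

With this mapping, a component range of `0..RUNTIMES.len()` gives exactly one sample point per collected runtime.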
The first manually evaluated results (on my weak notebook). Repeat: 20
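The regression question raised earlier (can a few datapoints yield a usable linear weight function?) boils down to an ordinary least-squares fit over (code_size, exec_time) samples. A minimal sketch in plain Rust, not the actual frame-benchmarking analysis code:

```rust
// Ordinary least-squares fit of time = slope * size + intercept over a
// handful of (code_size, exec_time) sample points.
fn linear_fit(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|(x, _)| x).sum();
    let sy: f64 = samples.iter().map(|(_, y)| y).sum();
    let sxx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let intercept = (sy - slope * sx) / n;
    (slope, intercept)
}
```

With only a handful of runtimes the fit is exact in form but weakly constrained, which is why the thread later leans toward hard-coding a single worst-case measurement instead.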
Okay, thanks for the results. 65ms is really fast compared to the 1.5 secs that we currently estimate.
Hm, no, it's not to my knowledge… but you could map …
So I'd change my bench to just use the kitchensink runtime and we assume that's the maximum size? BTW: wouldn't it be nice to have more control over the components than just using a range of values? How about adding an additional iterator interface to manually define sample points?
So if I only benchmark the kitchensink, I could also take the wasm binary from the … What's the canonical way to find the filesystem path for the runtime? Is there some function to query it, or should I use cargo environment variables?
Yea… there are a lot of things we could improve. But no time to do so 😑
Yea maybe, we only run these tests in Substrate.
The benchmarks run in no-std, so you cannot do file system operations. We probably have to hard-code it and update it from time to time. But then the problem is that the kitchensink will error when used with …
Or we just benchmark it with the Substrate kitchensink and then hard-code the weight. So all other chains that run the benchmark will get the exact "worst case" result that we measured.
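Hard-coding that worst case could look roughly like the sketch below. The 65ms figure is the measurement mentioned above; the constant and function names are assumptions for illustration, not the actual FRAME `Weight` API, though FRAME ref-time is indeed expressed in picoseconds.

```rust
// Hedged sketch: hard-code the worst case measured on reference hardware
// (~65ms for the largest runtime) instead of a size-dependent formula.
// Ref-time is in picoseconds, matching FRAME weight units.
const PICOS_PER_MILLI: u64 = 1_000_000_000;

fn set_code_ref_time() -> u64 {
    65 * PICOS_PER_MILLI
}
```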
How should I proceed here?
You would then disable the checks in … Or we hardcode, as @ggwpez said. If there isn't that much variance, hardcoding is probably the easiest way.
In general I would propose anyway that we put the real weight above …
Commits from the PR that resolved this issue:
* Still WIP (conflicts: frame/system/src/weights.rs)
* Still WIP
* Add benchmark for system::set_code intrinsic (fixes #13192)
* Fix format
* Add missing benchmark runtime
* Fix lint warning
* Consume the rest of the block and add test verification after the benchmark
* Rewrite set_code function
* Try to fix benchmarks and tests
* Remove weight tags
* Update frame/system/src/tests.rs
* Register ReadRuntimeVersionExt for benches
* Fix tests
* Fix deprecations (twice)
* ".git/.scripts/commands/bench/bench.sh" pallet dev frame_system
* Add update info and remove obsolete complexity comments
* ".git/.scripts/commands/fmt/fmt.sh" (twice)
* Update frame/system/src/lib.rs (review suggestions, four times)
* Update frame/system/benchmarking/src/lib.rs (review suggestion)
* Update README.md (trigger CI rebuild, twice)

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: command-bot <>
Currently we hard-code weights for System::set_code and System::set_code_without_checks. This is inflexible and always results in un-schedulable runtime upgrades. We should benchmark them instead.
cc @gavofyork