Fully Automated Benchmarking and Weight Generation #6168
Comments
I see — this should generate a runtime weight description file (in whatever format is suitable), and with #5966 we can attach this file to our runtime so it overrides all built-in values.
Previously we talked about adding weight methods to the Currency trait, but this strategy is not explained here. How do you see that fitting in?
About automation, I see difficulties with refunds, but we can still do manual refunds in case automation can't be achieved.
@thiolliere I updated my post a little bit, let me know if this helps. Ultimately the automation would result in one generated weight formula per benchmark. If we wanted to add weights to all the methods in the Currency trait, we would need to write a benchmark for each of them, and each would result in a simple formula that returns the weight of that function. The assumptions of the benchmarking should stay the same, and we should always be testing for the worst-case scenario. In the case of refunds, I would guess that in most cases we can use the same benchmarking formula and just modify the parameter inputs: whereas we initially charge for the worst-case input, on refund we would return the same formula evaluated with the actual input. In the case where we need a new formula, we should create a new benchmark for that logical path.
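As an illustrative sketch of the "same formula, different inputs" refund idea (the extrinsic, function names, and constants here are all hypothetical, not from any actual pallet):

```rust
// Hypothetical worst-case bound that pre-dispatch weight assumes.
const MAX_ITEMS: u64 = 1_000;

/// Weight formula produced by a benchmark: linear in the number of items.
fn weight_for_process(items: u64) -> u64 {
    10_000 + 500 * items
}

fn main() {
    // Pre-dispatch: charge for the worst case.
    let charged = weight_for_process(MAX_ITEMS);

    // Post-dispatch: the extrinsic actually touched only 3 items, so the
    // refund reuses the same formula with the real parameter input.
    let actual = weight_for_process(3);
    let refund = charged - actual;

    println!("charged={} actual={} refund={}", charged, actual, refund);
}
```

Only when the actual execution takes a genuinely different logical path would a separate benchmark (and therefore a separate formula) be needed.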
Can you elaborate? I didn't understand this point.
One minor glitch is that once we weight-ify all the inter-pallet traits (such as currency) and have a benchmark for both the extrinsic and the trait method, we could end up with something like:

```rust
#[weight = weights::weight_for_transfer(a, b) + weights::weight_for_currency_transfer(a, b)]
fn transfer(a, b) {
    // other stuff
    T::Currency::transfer(a, b)
    // more stuff
}
```

We could in theory write a tool that scans the code and warns you if you are using a trait method without accounting for its weight. All in all, I wanted to point out that for calls which have these nested, additional weights due to their usage of traits, we always need to keep an eye on the code as well. The rest of it can be automated though.

Honestly, after reading my comment I am not quite sure if my understanding is correct. I think my example is wrong: whatever the weight of `T::Currency::transfer` is, it would already be included in the benchmark of `transfer`, since the benchmark executes the whole function. But then I wonder why we even need to benchmark the trait methods separately at all.
@kianenigma good point. Actually I have no idea how to address this in our benchmarks currently. One crazy solution would be to have the trait functions benchmarked separately. The other one could be that we subtract weights to find the real value.
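A sketch of the subtraction idea, with hypothetical function names and made-up constants: the benchmark of the whole extrinsic inevitably executes the inner trait call, so subtracting a separately benchmarked inner weight isolates the cost of the outer logic.

```rust
// Illustrative only: these are not real benchmark results.

/// Measured by benchmarking the whole `transfer` extrinsic,
/// including the inner `T::Currency::transfer` call.
fn weight_for_transfer_benchmark() -> u64 {
    200_000
}

/// Measured by benchmarking the currency transfer in isolation.
fn weight_for_currency_transfer() -> u64 {
    80_000
}

/// The weight of just the "outer" logic is the difference.
fn weight_for_transfer_outer_only() -> u64 {
    weight_for_transfer_benchmark() - weight_for_currency_transfer()
}

fn main() {
    println!("outer-only weight = {}", weight_for_transfer_outer_only());
}
```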
Yeah, this is probably what we'll need to do. So we'll likely want to somehow figure out those internal calls into the inter-pallet traits. Otherwise we'll be effectively fixing the weights corresponding to those trait implementations. Might be that we need to have some tracking code in these inter-pallet traits' impls to help automate this during benchmarking. Otherwise it'll likely be down to manual annotation.
But why would we do it? Is it "just" so we can calculate the corrected weight (post dispatch) when the actual cost turns out to be lower?
Please make this generation optional. For the contracts pallet the benchmarks won't correlate with the dispatchables of the pallet, and therefore we need a custom `WeightInfo` implementation.
Session, Elections Phragmen, and Multisig are yet to be merged, but everything seems to be converted.
The long-term goal of the benchmarking/weight effort should be to automate the whole process for the runtime engineer.
This means that if a runtime developer writes an accurate set of benchmarks for their extrinsics, they can run a few simple commands and the tooling should do all the heavy lifting of benchmarking their runtime and creating the proper weight formulas.
Overview
In order to provide full end-to-end automation of this process, we need to automate the following:

- Add a WeightInfo trait to all pallets with benchmarks (#6575)
- Update the decl_module macro to automatically generate the WeightInfo trait
- Integrate the generated WeightInfo into the runtime: #6610 (Balances Weight Trait; companion polkadot#1425)

Benchmarking Runtime Execution
This process is already automated with our current benchmarking pipeline. We execute the extrinsic given some initial setup and collect data about the results of that benchmark. These data points are then put through a linear regression analysis, which gives us the linear formula for this extrinsic. Currently this information is output as text to the console, but in this end-to-end pipeline we need to extract this data and use it to generate the weight formulas.
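As a rough sketch of the regression step (this is illustrative, not Substrate's actual analysis code), an ordinary least-squares fit over the collected samples recovers a base weight and a per-unit slope:

```rust
/// Fit `time = base + slope * x` to benchmark samples `(x, time)`
/// using ordinary least squares. Returns `(base, slope)`.
fn fit_line(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let (sx, sy) = samples
        .iter()
        .fold((0.0, 0.0), |(a, b), &(x, y)| (a + x, b + y));
    let (mx, my) = (sx / n, sy / n);
    let num: f64 = samples.iter().map(|&(x, y)| (x - mx) * (y - my)).sum();
    let den: f64 = samples.iter().map(|&(x, _)| (x - mx) * (x - mx)).sum();
    let slope = num / den;
    let base = my - slope * mx;
    (base, slope)
}

fn main() {
    // Made-up benchmark data: execution time grows linearly with input size.
    let samples = [(1.0, 12.0), (2.0, 14.0), (3.0, 16.0), (4.0, 18.0)];
    let (base, slope) = fit_line(&samples);
    // Recovered formula: weight(x) ≈ base + slope * x
    println!("base = {}, slope = {}", base, slope);
}
```

The generated weight formula would then be this fitted line, rounded conservatively upward.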
Benchmarking DB Operations
Currently we benchmark DB operations through an external process which inspects the state logs while executing the benchmarks. With this we are able to see the DB operations that take place during the execution.
Additionally, we have special filters which take into account unique reads/writes to DB keys, and which whitelist certain keys from counting against the weight of an extrinsic. For example, if an extrinsic reads from a storage key more than once, we only count the first read as a DB operation; anything else would be "in-memory". If we write to a key and then read from it, we only count the "write" operation, as the read would then be free. If we read/write to common storage items like events, the caller account, etc., we count these as free, since we know they are already accounted for in other weight calculations.
We may need to add a special DB overlay to accurately track the DB reads and writes, as well as implement a hash table so we can remove duplicate reads/writes and add any other fancy logic we want, like a whitelist. This should all be enabled only for benchmarks, so that normal node operation does not incur this overhead.
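The dedup/whitelist rules described above could be sketched like this (illustrative only, not the actual Substrate overlay; the key names are made up):

```rust
use std::collections::HashSet;

/// Tracks which DB reads/writes should count toward weight, applying the
/// rules: duplicate reads are free, reads after a write are free, and
/// whitelisted keys are always free.
#[derive(Default)]
struct DbTracker {
    reads: HashSet<Vec<u8>>,
    writes: HashSet<Vec<u8>>,
    whitelist: HashSet<Vec<u8>>,
}

impl DbTracker {
    fn on_read(&mut self, key: &[u8]) {
        // Whitelisted keys, repeated reads, and reads of already-written
        // keys are "in-memory" and do not count.
        if self.whitelist.contains(key)
            || self.reads.contains(key)
            || self.writes.contains(key)
        {
            return;
        }
        self.reads.insert(key.to_vec());
    }

    fn on_write(&mut self, key: &[u8]) {
        // Whitelisted keys and repeated writes do not count again.
        if self.whitelist.contains(key) || self.writes.contains(key) {
            return;
        }
        self.writes.insert(key.to_vec());
    }

    /// Number of (reads, writes) that count toward the weight.
    fn counted_ops(&self) -> (usize, usize) {
        (self.reads.len(), self.writes.len())
    }
}

fn main() {
    let mut t = DbTracker::default();
    t.whitelist.insert(b"events".to_vec());

    t.on_read(b"balance:alice"); // counted
    t.on_read(b"balance:alice"); // duplicate read: free
    t.on_write(b"balance:bob");  // counted
    t.on_read(b"balance:bob");   // read after write: free
    t.on_write(b"events");       // whitelisted: free

    println!("{:?}", t.counted_ops());
}
```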
Generating Weight Formulas and Appropriate Rust Files
Finally, once we have all this data, we need to automate a process that puts it together in a usable way.
The output of this automated process should be a Rust module, weights.rs, for each pallet. Each benchmark written will generate an equivalent weight_for_* function. When integrating the weight information into our runtime, we then simply reference the generated function from the extrinsic's weight annotation. This also works for any piecewise weight calculations.
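A hypothetical sketch of what such a generated weights.rs and its usage might look like (the function name, parameter, and constants are illustrative, not real generator output):

```rust
// Sketch of a generated `weights.rs` module.
pub mod weights {
    /// Generated from the `transfer` benchmark: a base cost plus a
    /// per-unit cost in `e` (some complexity parameter of the call).
    pub fn weight_for_transfer(e: u64) -> u64 {
        50_000 + 1_200 * e
    }
}

// In the pallet, the dispatchable would then reference the generated
// formula, e.g. (pseudocode for the weight annotation):
//
//   #[weight = weights::weight_for_transfer(*e)]
//   fn transfer(origin, e: u64) { ... }

fn main() {
    println!("{}", weights::weight_for_transfer(10));
}
```

Because the function name is stable across regenerations, rerunning the pipeline updates the constants without touching the pallet code that references it.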
When we modify logic and need to update the weights, we simply run the pipeline again, and these formulas with the same name will simply be updated to represent the new weight.