benches: disable caching per default #12232
Conversation
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
After thinking about this more, I'm not sure how it worked before?
And when we disable the cache, wouldn't this hurt the other benchmarks which rely on the cache working?
@@ -105,7 +105,7 @@ pub struct StorageParams {
     /// Trie cache size in bytes.
     ///
     /// Providing `0` will disable the cache.
-    #[clap(long, default_value = "1024")]
+    #[clap(long, default_value = "0")]
Suggested change:
-    #[clap(long, default_value = "0")]
+    #[clap(long, default_value = "1024")]
Otherwise it would not get enabled when you set enable_trie_cache.
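A minimal sketch of the concern, assuming the two parameters are combined roughly like below (the function signature is illustrative, not the exact Substrate code): with a default size of 0, passing only the enable flag would still leave the cache off.

// Illustrative only: how --enable-trie-cache and --trie-cache-size could interact.
fn trie_cache_maximum_size(enable_trie_cache: bool, trie_cache_size: usize) -> Option<usize> {
    if enable_trie_cache && trie_cache_size > 0 {
        Some(trie_cache_size)
    } else {
        // A size of 0 (the proposed default), or not enabling the cache at all, disables it.
        None
    }
}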
> After thinking about this more, I'm not sure how it worked before?

I am not sure either. Maybe the state-cache just did not have such a large influence as the Trie cache now does. But yeah, the former results should already be wrong due to state-caching. I did not notice this earlier, as --state-cache-size did not make a noticeable difference.
> And when we disable the cache, wouldn't this hurt the other benchmarks which rely on the cache working?

Well, we need to measure worst case performance… so a second iteration of a benchmark should not rely on a former iteration. None of the benches should actually rely on a cache.
> Otherwise it would not get enabled when you set enable_trie_cache.

I did not add an --enable-trie-cache flag for the storage command, since the trie-cache-size was not inherited from the import commands. But I will add it for consistency.
Wait, I think this is in bytes; I will change it to 67108864.
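For reference, that value is just 64 MiB written out in bytes; a one-line sketch (the constant name is hypothetical, not from the Substrate code):

// 64 MiB in bytes; the constant name is made up for illustration.
const TRIE_CACHE_SIZE_BYTES: usize = 64 * 1024 * 1024; // = 67_108_864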
> Well, we need to measure worst case performance… so a second iteration of a benchmark should not rely on a former iteration. None of the benches should actually rely on a cache.

Makes sense, but how do we then handle values which we assume are already present, aka the whitelisted values? Do we just somehow remove them from the formula that gives us the weight?
> Do we just somehow remove them from the formula that gives us the weight?

For pallet weights: Yes.
substrate/client/db/src/bench.rs
Lines 524 to 531 in 7de3d78
fn read_write_count(&self) -> (u32, u32, u32, u32) {
    let mut reads = 0;
    let mut repeat_reads = 0;
    let mut writes = 0;
    let mut repeat_writes = 0;
    self.all_trackers().iter().for_each(|tracker| {
        if !tracker.whitelisted {
That could be written a bit nicer with a .filter 🙈
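A standalone sketch of what that .filter variant could look like; the Tracker struct below is a simplified stand-in for the real tracker type in bench.rs, not the actual code:

struct Tracker {
    whitelisted: bool,
    reads: u32,
    writes: u32,
}

fn read_write_count(trackers: &[Tracker]) -> (u32, u32) {
    let (mut reads, mut writes) = (0, 0);
    trackers
        .iter()
        // Skip whitelisted keys up front instead of branching inside the loop body.
        .filter(|tracker| !tracker.whitelisted)
        .for_each(|tracker| {
            reads += tracker.reads;
            writes += tracker.writes;
        });
    (reads, writes)
}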
For storage weights: Does not matter, since we need the average read/write weight for a key regardless of the whitelist; whitelisted values are also read once.
For BlockExecutionWeight and ExtrinsicBaseWeight: No, since this is where we want to account for the cost of whitelisted keys, so they are included here.
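To make the cases above concrete, an illustrative sketch of where the whitelist does and does not show up; the names and formula shape are assumptions, not Substrate APIs:

// Pallet weights: accesses to whitelisted keys are filtered out before the
// per-access DB cost is applied, mirroring the read_write_count filtering above.
fn pallet_storage_weight(non_whitelisted_reads: u64, non_whitelisted_writes: u64,
                         read_weight: u64, write_weight: u64) -> u64 {
    non_whitelisted_reads * read_weight + non_whitelisted_writes * write_weight
}

// BlockExecutionWeight / ExtrinsicBaseWeight are measured directly (on an
// essentially empty block / extrinsic), so the one-time cost of touching the
// whitelisted keys is baked into the measured constants rather than subtracted.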
Co-authored-by: Bastian Köcher <info@kchr.de>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
bot merge
* Disable cache for storage benches
  Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Disable caching per default
  Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
* Update utils/frame/benchmarking-cli/src/storage/cmd.rs
  Co-authored-by: Bastian Köcher <info@kchr.de>
* Add --enable-trie-cache to 'storage' command
  Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Bastian Köcher <info@kchr.de>
Spotted some large weight decrease for BlockExecutionWeight on the Polkadot release branch v0.9.29 here. It comes from the new Trie cache #11407 feature, which has an unforeseen influence on the weights here.
This MR disables caching per default and makes enabling it explicit.