Ghali/benchmark data root #204
Changes from 2 commits
```diff
@@ -204,6 +204,36 @@ benchmarks! {
 		let _data_root = submitted_data::extrinsics_root::<T::SubmittedDataExtractor, _>(once(&opaque));
 	}

+	// This benchmark is not directly used by an extrinsic.
+	// It is mostly used to check that the weight is lower than, or approximately
+	// equal to, the `data_root` benchmark.
+	data_root_batch {
+		let i in 0..(2 * 1024 * 1024);
+
+		let max_tx_size = T::MaxAppDataLength::get();
+		let nb_full_tx = i / max_tx_size;
+		let remaining_size = i % max_tx_size;
+		let nb_additional_tx = if remaining_size > 0 { 1 } else { 0 };
+		let mut calls = Vec::with_capacity(nb_full_tx as usize + nb_additional_tx as usize);
+
+		// Create the full-sized transactions
+		for _ in 0..nb_full_tx {
+			let data = generate_bounded::<AppDataFor<T>>(max_tx_size);
+			let opaque = submit_data_ext::<T>(data);
+			calls.push(opaque);
+		}
+
+		// If there is a remaining size, create one more transaction
+		if remaining_size > 0 {
+			let data = generate_bounded::<AppDataFor<T>>(remaining_size);
+			let opaque = submit_data_ext::<T>(data);
+			calls.push(opaque);
+		}
+	}: {
+		// TODO: Here the `.iter()` is also accounted for in the benchmark. This is
+		// unwanted and should be avoidable once we use the new benchmarking syntax.
+		let _data_root = submitted_data::extrinsics_root::<T::SubmittedDataExtractor, _>(calls.iter());
+	}
+
 	commitment_builder_32 {
 		let i in 32..T::MaxBlockRows::get().0;
 		let (txs, root, block_length, block_number, seed) = commitment_parameters::<T>(i, 32);
```
```diff
@@ -51,6 +51,7 @@ pub trait WeightInfo {
 	fn submit_block_length_proposal() -> Weight;
 	fn submit_data(i: u32, ) -> Weight;
 	fn data_root(i: u32, ) -> Weight;
+	fn data_root_batch(i: u32, ) -> Weight;
 	fn commitment_builder_32(i: u32, ) -> Weight;
 	fn commitment_builder_64(i: u32, ) -> Weight;
 	fn commitment_builder_128(i: u32, ) -> Weight;
@@ -84,10 +85,17 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
 	}
 	/// The range of component `i` is `[0, 524288]`.
 	fn data_root(i: u32, ) -> Weight {
-		// Minimum execution time: 609 nanoseconds.
-		Weight::from_ref_time(672_000_u64)
-			// Standard Error: 12
-			.saturating_add(Weight::from_ref_time(4_321_u64).saturating_mul(i as u64))
+		// Minimum execution time: 1_457 nanoseconds.
+		Weight::from_ref_time(1_487_000_u64)
+			// Standard Error: 21
+			.saturating_add(Weight::from_ref_time(5_251_u64).saturating_mul(i as u64))
 	}
+	/// The range of component `i` is `[0, 2097152]`.
+	fn data_root_batch(i: u32, ) -> Weight {
+		// Minimum execution time: 778 nanoseconds.
+		Weight::from_ref_time(825_000_u64)
+			// Standard Error: 5
+			.saturating_add(Weight::from_ref_time(4_931_u64).saturating_mul(i as u64))
+	}
 	/// The range of component `i` is `[32, 1024]`.
 	fn commitment_builder_32(i: u32, ) -> Weight {
```

> **Reviewer:** Do we know if this increase is related to the change of the hashing algorithm in the data root calculation?
>
> **Author:** Yes, it was confirmed that it's not from the changes in the algorithm. Also, benchmarking like this is not very stable, meaning we can get different results on a retry.
```diff
@@ -145,10 +153,17 @@ impl WeightInfo for () {
 	}
 	/// The range of component `i` is `[0, 524288]`.
 	fn data_root(i: u32, ) -> Weight {
-		// Minimum execution time: 609 nanoseconds.
-		Weight::from_ref_time(672_000_u64)
-			// Standard Error: 12
-			.saturating_add(Weight::from_ref_time(4_321_u64).saturating_mul(i as u64))
+		// Minimum execution time: 1_457 nanoseconds.
+		Weight::from_ref_time(1_487_000_u64)
+			// Standard Error: 21
+			.saturating_add(Weight::from_ref_time(5_251_u64).saturating_mul(i as u64))
 	}
+	/// The range of component `i` is `[0, 2097152]`.
+	fn data_root_batch(i: u32, ) -> Weight {
+		// Minimum execution time: 778 nanoseconds.
+		Weight::from_ref_time(825_000_u64)
+			// Standard Error: 5
+			.saturating_add(Weight::from_ref_time(4_931_u64).saturating_mul(i as u64))
+	}
 	/// The range of component `i` is `[32, 1024]`.
 	fn commitment_builder_32(i: u32, ) -> Weight {
```

> **Reviewer:** It is weird that one …
>
> **Author:** I guess when the range is greater, the results are smoother.
> **Reviewer:** `i` should be greater than `max_tx_size`, and we should use the `max_block_len` (not sure about this name). It would be nice if the step were `max_tx_size` too.
>
> **Author:** Here, `i` goes from 0 to 2 MB and we make n transactions from it, so it will benchmark the creation of the data root up to the maximum that we can put in a block. Why should we go from `max_tx_size` to `max_block_len`, which is basically the same, from 512 KB to 2 MB?
>
> **Author:** Also, do we have this `max_block_len` somewhere? I'm not sure we have something equivalent in the code...