Ghali/benchmark data root #204

Merged 3 commits on Sep 1, 2023
Changes from 2 commits
pallets/dactr/src/benchmarking.rs (30 additions, 0 deletions)

@@ -204,6 +204,36 @@ benchmarks! {
let _data_root = submitted_data::extrinsics_root::<T::SubmittedDataExtractor, _>(once(&opaque));
}

// This benchmark is not directly used by an extrinsic.
// It is mostly used to check that the weight is lower than, or approximately equal to, the `data_root` benchmark.
data_root_batch {
let i in 0..(2 * 1024 * 1024);
Collaborator:
`i` should be greater than `max_tx_size`, and we should use `max_block_len` (not sure about this name).

Suggested change:
-let i in 0..(2 * 1024 * 1024);
+let i in max_tx_size..max_block_len;

It would be nice if the step were `max_tx_size` too.

Contributor (author):
Here, `i` goes from 0 to 2 MB and we make n transactions from it, so it benchmarks the creation of the data root up to the maximum that we can put in a block.

Why should we go from `max_tx_size` to `max_block_len`, which is basically the same thing, i.e. from 512 KB to 2 MB?

Contributor (author):
Also, do we have this `max_block_len` somewhere? I'm not sure we have anything equivalent in the code...


let max_tx_size = T::MaxAppDataLength::get();
let nb_full_tx = i / max_tx_size;
let remaining_size = i % max_tx_size;
let nb_additional_tx = if remaining_size > 0 { 1 } else { 0 };
let mut calls = Vec::with_capacity(nb_full_tx as usize + nb_additional_tx as usize);
Collaborator:
nit:

Suggested change:
-let nb_additional_tx = if remaining_size > 0 { 1 } else { 0 };
-let mut calls = Vec::with_capacity(nb_full_tx as usize + nb_additional_tx as usize);
+let mut calls = Vec::with_capacity(nb_full_tx as usize + 1usize);
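As an aside (an editor's sketch, not from the thread): the suggested one-liner simply over-reserves one slot when `remaining_size == 0`. `Vec::with_capacity` only pre-allocates; it does not affect correctness:

fn main() {
    let nb_full_tx = 4usize;
    // Suggested form: always reserve room for one extra (possibly unused) tx.
    let mut calls: Vec<u32> = Vec::with_capacity(nb_full_tx + 1);
    for x in 0..nb_full_tx as u32 {
        calls.push(x);
    }
    assert_eq!(calls.len(), 4);
    assert!(calls.capacity() >= 5); // the unused slot is harmless; capacity is a hint
}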


// Create the full-sized transactions
for _ in 0..nb_full_tx {
let data = generate_bounded::<AppDataFor<T>>(max_tx_size);
let opaque = submit_data_ext::<T>(data);
calls.push(opaque);
}

// If there is a remaining size, create one more transaction
if remaining_size > 0 {
let data = generate_bounded::<AppDataFor<T>>(remaining_size);
let opaque = submit_data_ext::<T>(data);
calls.push(opaque);
}
}:{
// TODO: Here the `.iter()` is also accounted for in the benchmark. It's unwanted
// and should be avoidable once we use the new benchmarking syntax.
let _data_root = submitted_data::extrinsics_root::<T::SubmittedDataExtractor, _>(calls.iter());
}
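To make the batching above concrete, here is an editor's worked example of the split arithmetic, assuming a hypothetical `max_tx_size` of 512 KiB (the `T::MaxAppDataLength` value discussed in the thread above):

fn main() {
    let max_tx_size: u32 = 512 * 1024; // assumed value of T::MaxAppDataLength::get()
    let i: u32 = 1_300_000;            // total bytes submitted at this benchmark step

    let nb_full_tx = i / max_tx_size;      // 1_300_000 / 524_288 = 2 full transactions
    let remaining_size = i % max_tx_size;  // 1_300_000 - 1_048_576 = 251_424 leftover bytes

    assert_eq!(nb_full_tx, 2);
    assert_eq!(remaining_size, 251_424);
    // So this step builds 2 full-sized transactions plus 1 partial one,
    // covering the whole range up to the 2 MiB maximum as `i` grows.
}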

commitment_builder_32{
let i in 32..T::MaxBlockRows::get().0;
let (txs, root, block_length, block_number, seed) = commitment_parameters::<T>(i, 32);
pallets/dactr/src/weights.rs (23 additions, 8 deletions)

@@ -51,6 +51,7 @@ pub trait WeightInfo {
fn submit_block_length_proposal() -> Weight;
fn submit_data(i: u32, ) -> Weight;
fn data_root(i: u32, ) -> Weight;
fn data_root_batch(i: u32, ) -> Weight;
fn commitment_builder_32(i: u32, ) -> Weight;
fn commitment_builder_64(i: u32, ) -> Weight;
fn commitment_builder_128(i: u32, ) -> Weight;
@@ -84,10 +85,17 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
}
/// The range of component `i` is `[0, 524288]`.
fn data_root(i: u32, ) -> Weight {
-// Minimum execution time: 609 nanoseconds.
-Weight::from_ref_time(672_000_u64)
-// Standard Error: 12
-.saturating_add(Weight::from_ref_time(4_321_u64).saturating_mul(i as u64))
+// Minimum execution time: 1_457 nanoseconds.
+Weight::from_ref_time(1_487_000_u64)
Collaborator:
Do we know if this increase is related to the change of the hashing algorithm in the data root calculation?

Contributor (author):
Yes, it was confirmed to me that it's not from the changes in the algorithm.

Also, benchmarking like this is not very stable, meaning we can get different results on retry.
+// Standard Error: 21
+.saturating_add(Weight::from_ref_time(5_251_u64).saturating_mul(i as u64))
}
/// The range of component `i` is `[0, 2097152]`.
fn data_root_batch(i: u32, ) -> Weight {
// Minimum execution time: 778 nanoseconds.
Weight::from_ref_time(825_000_u64)
// Standard Error: 5
.saturating_add(Weight::from_ref_time(4_931_u64).saturating_mul(i as u64))
}
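As a quick cross-check of the comment at the top of the new benchmark (an editor's sketch; the constants are taken verbatim from the fitted formulas above, ref time in picoseconds): at one full 512 KiB transaction, the batch formula indeed comes out slightly cheaper than the single-submission `data_root` formula.

fn main() {
    // Fitted weight formulas from this file (ref time, picoseconds):
    let data_root = |i: u64| 1_487_000 + 5_251 * i;      // component range [0, 524_288]
    let data_root_batch = |i: u64| 825_000 + 4_931 * i;  // component range [0, 2_097_152]

    let i = 512 * 1024; // one full transaction, 524_288 bytes
    assert_eq!(data_root(i), 2_754_523_288);       // ~2.75 ms
    assert_eq!(data_root_batch(i), 2_586_089_128); // ~2.59 ms, lower as intended
}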
/// The range of component `i` is `[32, 1024]`.
fn commitment_builder_32(i: u32, ) -> Weight {
@@ -145,10 +153,17 @@ impl WeightInfo for () {
}
/// The range of component `i` is `[0, 524288]`.
fn data_root(i: u32, ) -> Weight {
-// Minimum execution time: 609 nanoseconds.
-Weight::from_ref_time(672_000_u64)
-// Standard Error: 12
-.saturating_add(Weight::from_ref_time(4_321_u64).saturating_mul(i as u64))
+// Minimum execution time: 1_457 nanoseconds.
+Weight::from_ref_time(1_487_000_u64)
+// Standard Error: 21
+.saturating_add(Weight::from_ref_time(5_251_u64).saturating_mul(i as u64))
}
/// The range of component `i` is `[0, 2097152]`.
fn data_root_batch(i: u32, ) -> Weight {
// Minimum execution time: 778 nanoseconds.
Weight::from_ref_time(825_000_u64)
Collaborator:
It is weird that one data_root takes 1_487 as the minimum but several data_root take 824 (less).
I mean, it should be equal or greater, right?

Contributor (author):
I guess when the range is greater, the results are smoother.
// Standard Error: 5
.saturating_add(Weight::from_ref_time(4_931_u64).saturating_mul(i as u64))
}
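One plausible reading of the discrepancy above (an editor's sketch, assuming the standard FRAME approach of fitting weights by linear regression over the sampled component range): the constant term is a least-squares intercept extrapolated to i = 0, not a measured floor, so it can legitimately come out below the cheapest observed run.

// Ordinary least-squares fit of y = a + b*x; returns the intercept a.
fn intercept(samples: &[(f64, f64)]) -> f64 {
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|(x, _)| x).sum();
    let sy: f64 = samples.iter().map(|(_, y)| y).sum();
    let sxx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    (sy - slope * sx) / n
}

fn main() {
    // Hypothetical timings: the cheapest observed run costs 1_487 units,
    // yet the fitted constant comes out near 649, well below any sample.
    let samples = [(1.0_f64, 1_487.0_f64), (2.0, 1_800.0), (3.0, 2_900.0)];
    println!("intercept = {:.0}", intercept(&samples)); // ~649
}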
/// The range of component `i` is `[32, 1024]`.
fn commitment_builder_32(i: u32, ) -> Weight {