
Added chunk impl for parallel sender recovery #5622

Closed
wants to merge 1 commit

Conversation

Arindam2407
Contributor

Resolves issue #5189.

I am opening a new pull request. I was not able to run the mainnet SenderRecovery stage because of my device specs, so instead I ran optimized tests using cargo test --package reth-primitives --release to get more reliable measurements, as pointed out by @rakita. I compared the performance before/after (without/with chunks) for recovering 100,000 transactions.
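
For context, the chunked path splits the transaction slice into per-thread chunks and recovers each chunk inside a single rayon task, instead of spawning one parallel task per transaction. The sketch below is illustrative only, assuming a rayon par_chunks approach; Tx, recover_one, and recover_signers_chunked are hypothetical stand-ins and not the exact code in this PR's diff:

use rayon::prelude::*;

// Hypothetical stand-in for a signed transaction; the real code operates on
// TransactionSigned and returns Address, but those types are not needed to
// show the chunking pattern.
#[derive(Clone)]
struct Tx(u64);

// Stand-in for per-transaction signer recovery (None on failure).
fn recover_one(tx: &Tx) -> Option<u64> {
    Some(tx.0 ^ 0xdead_beef)
}

// Split the slice into roughly one chunk per rayon worker thread and recover
// each chunk sequentially inside its own parallel task, so scheduling
// overhead is amortized over many transactions.
fn recover_signers_chunked(txes: &[Tx], num_txes: usize) -> Option<Vec<u64>> {
    let chunk_size = (num_txes / rayon::current_num_threads()).max(1);
    txes.par_chunks(chunk_size)
        .map(|chunk| chunk.iter().map(recover_one).collect::<Option<Vec<_>>>())
        .collect::<Option<Vec<Vec<_>>>>()
        .map(|chunks| chunks.into_iter().flatten().collect())
}

fn main() {
    let txes: Vec<Tx> = (0..100_000).map(Tx).collect();
    let senders = recover_signers_chunked(&txes, txes.len());
    assert_eq!(senders.map(|s| s.len()), Some(100_000));
}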

The test:

Additional imports used in the tests module:

use std::{fs::File, io::Write, time::Instant};
use rayon::prelude::*;

    // This lives inside a proptest! { ... } block, since the `txes in ...`
    // parameter syntax requires the proptest macro.
    #[test]
    fn benchmark_chunks_parallel_sender_recovery_100000(txes in proptest::collection::vec(proptest::prelude::any::<Transaction>(), *PARALLEL_SENDER_RECOVERY_THRESHOLD * 20000)) {
        let mut rng = rand::thread_rng();
        let secp = Secp256k1::new();
        let txes: Vec<TransactionSigned> = txes.into_iter().map(|mut tx| {
            if let Some(chain_id) = tx.chain_id() {
                // Otherwise we might overflow when calculating `v` on `recalculate_hash`
                tx.set_chain_id(chain_id % (u64::MAX / 2 - 36));
            }

            let key_pair = KeyPair::new(&secp, &mut rng);

            let signature =
                sign_message(B256::from_slice(&key_pair.secret_bytes()[..]), tx.signature_hash()).unwrap();

            TransactionSigned::from_transaction_and_signature(tx, signature)
        }).collect();

        let mut file = File::create("benchmark.txt").expect("failed to create benchmark file");

        // Time the chunked implementation.
        let start_chunk = Instant::now();
        let _par_chunk_senders = TransactionSigned::recover_signers(&txes, txes.len());
        let elapsed_chunk = start_chunk.elapsed();

        // Time the current per-transaction parallel iterator implementation.
        let start = Instant::now();
        let _par_senders = txes.into_par_iter().map(|tx| tx.recover_signer()).collect::<Option<Vec<_>>>();
        let elapsed = start.elapsed();

        write!(file, "Time taken for parallel recovery of 100000 addresses: {:.2?} (with chunks) and {:.2?} (without chunks)", elapsed_chunk, elapsed)
            .expect("failed to write benchmark results");
    }

The results:
[Attached screenshots: benchmark_run_1 through benchmark_run_5]

The chunk implementation takes 20-30% less time than the current implementation in all runs. It would be great if someone else could run the benchmark and compare the performance. Since these are all optimized (release-mode) test runs, I think the estimates are reliable.
