
Muhash parallel reduce -- optimize U3072 mul when LHS = one #581

Merged (5 commits) on Oct 13, 2024

Conversation

michaelsutton (Contributor):
This PR improves the performance of parallel muhash reduction by adding a special case to the inner U3072::mul. Results suggest that multi-threading now scales strongly, as opposed to before this change, where much of the gain was offset by the increased number of inner mul operations.

The optimization is to short-circuit and self-assign other (the RHS) when the LHS is one. This case is especially frequent during the parallel reduce operation, where the identity element (one) seeds each sub-computation (as the LHS).

Benchmarks before 09d6679:

Benchmarking muhash txs/muhash seq: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 9.0s, enable flat sampling, or reduce sample count to 50.
muhash txs/muhash seq   time:   [1.7592 ms 1.7634 ms 1.7676 ms]
                        change: [+0.5061% +0.8974% +1.2930%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild
muhash txs/muhash par 8 time:   [478.03 µs 479.56 µs 480.95 µs]
                        change: [+0.3087% +0.7933% +1.2579%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) low mild
muhash txs/muhash par 16
                        time:   [394.97 µs 397.13 µs 399.10 µs]
                        change: [-0.9620% -0.0286% +0.8966%] (p = 0.95 > 0.05)
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  2 (2.00%) low mild
  1 (1.00%) high mild
muhash txs/muhash par 32
                        time:   [476.49 µs 486.11 µs 497.24 µs]
                        change: [-3.0751% -0.5378% +2.0900%] (p = 0.69 > 0.05)
                        No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
  2 (2.00%) high mild
  3 (3.00%) high severe

and after:

Benchmarking muhash txs/muhash seq: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.8s, enable flat sampling, or reduce sample count to 50.
muhash txs/muhash seq   time:   [1.7494 ms 1.7541 ms 1.7584 ms]
                        change: [-1.5057% -1.1099% -0.7332%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
  4 (4.00%) high mild
  1 (1.00%) high severe
muhash txs/muhash par 8 time:   [334.88 µs 335.69 µs 336.46 µs]
                        change: [-29.835% -29.499% -29.127%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  1 (1.00%) low mild
  3 (3.00%) high mild
  2 (2.00%) high severe
muhash txs/muhash par 16
                        time:   [287.39 µs 288.74 µs 290.02 µs]
                        change: [-27.467% -26.816% -26.179%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 14 outliers among 100 measurements (14.00%)
  7 (7.00%) low mild
  6 (6.00%) high mild
  1 (1.00%) high severe
muhash txs/muhash par 32
                        time:   [358.20 µs 361.12 µs 364.06 µs]
                        change: [-25.664% -24.294% -22.887%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  5 (5.00%) high mild
  2 (2.00%) high severe

@coderofstuff (Collaborator) left a comment:

Ran the benchmarks and it looks like a solid ~30% improvement.

@elichai (Member) left a comment:

Looks good :)

michaelsutton merged commit 0df2de5 into kaspanet:master on Oct 13, 2024
6 checks passed
michaelsutton deleted the muhash-new-opt branch on October 13, 2024 at 17:54
3 participants