bench: Don't pause/resume timer #383

Merged 1 commit into master from bench on Jun 25, 2020
Conversation

@chfast (Collaborator) commented Jun 9, 2020

Most of the benchmark cases have no init memory at all; those are handled by a separate benchmark loop that never calls init_memory().

For the remaining cases (mul256, eli_interpreter) the init memory is very small (at most 105 bytes, for an eli_interpreter input). It is not worth pausing and resuming the timer, since (quoting the Google Benchmark docs):

PauseTiming()/ResumeTiming() are relatively heavyweight, and so their use should generally be avoided within each benchmark iteration, if possible.

Benchmark                                                            Time             CPU      Time Old      Time New       CPU Old       CPU New
fizzy/execute/blake2b/512_bytes_rounds_1_mean                     -0.0212         -0.0213            92            90            92            90
fizzy/execute/blake2b/512_bytes_rounds_16_mean                    -0.0179         -0.0179          1378          1354          1378          1354
fizzy/execute/ecpairing/onepoint_mean                             -0.0114         -0.0114        495862        490227        495865        490232
fizzy/execute/keccak256/512_bytes_rounds_1_mean                   +0.0041         +0.0040           109           110           109           110
fizzy/execute/keccak256/512_bytes_rounds_16_mean                  +0.0027         +0.0027          1605          1609          1605          1609
fizzy/execute/memset/256_bytes_mean                               -0.0623         -0.0631             8             8             8             8
fizzy/execute/memset/60000_bytes_mean                             -0.0150         -0.0150          1682          1657          1682          1657
fizzy/execute/mul256_opt0/input0_mean                             +0.0129         +0.0127            29            29            29            29
fizzy/execute/mul256_opt0/input1_mean                             +0.0121         +0.0119            29            29            29            29
fizzy/execute/sha1/512_bytes_rounds_1_mean                        -0.0259         -0.0260            99            97            99            97
fizzy/execute/sha1/512_bytes_rounds_16_mean                       -0.0108         -0.0108          1360          1345          1360          1345
fizzy/execute/sha256/512_bytes_rounds_1_mean                      -0.0108         -0.0109            98            97            98            97
fizzy/execute/sha256/512_bytes_rounds_16_mean                     -0.0089         -0.0089          1351          1339          1351          1339
fizzy/execute/micro/eli_interpreter/halt_mean                     -0.8269         -0.8277             1             0             1             0
fizzy/execute/micro/eli_interpreter/exec105_mean                  -0.0803         -0.0815             6             5             6             5
fizzy/execute/micro/factorial/10_mean                             -0.4272         -0.4293             1             1             1             1
fizzy/execute/micro/factorial/20_mean                             -0.2747         -0.2766             2             1             2             1
fizzy/execute/micro/fibonacci/24_mean                             -0.0064         -0.0064         10202         10136         10202         10136
fizzy/execute/micro/host_adler32/1_mean                           -0.7817         -0.7834             1             0             1             0
fizzy/execute/micro/host_adler32/100_mean                         -0.0625         -0.0632             7             6             7             6
fizzy/execute/micro/host_adler32/1000_mean                        +0.0036         +0.0035            63            63            63            63
fizzy/execute/micro/spinner/1_mean                                -0.9216         -0.9219             0             0             0             0
fizzy/execute/micro/spinner/1000_mean                             -0.0497         -0.0502            11            11            11            11
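
For reference, here is a minimal sketch of what the change amounts to in a Google Benchmark loop. This is not fizzy's actual benchmark code; init_memory() and execute() are hypothetical stand-ins for the per-iteration setup and the measured work.

```cpp
// Minimal sketch, NOT fizzy's real benchmark code: init_memory() and execute()
// are hypothetical stand-ins for the per-iteration setup and the measured work.
#include <benchmark/benchmark.h>

#include <cstdint>
#include <cstring>
#include <vector>

namespace
{
std::vector<uint8_t> memory(105);  // ~105 bytes, like the largest eli_interpreter input.

void init_memory() noexcept { std::memset(memory.data(), 0, memory.size()); }

void execute() noexcept { benchmark::DoNotOptimize(memory.data()); }

// Old approach: pause/resume the timer around the tiny setup. For sub-microsecond
// cases the PauseTiming()/ResumeTiming() calls dominate the measurement.
void with_pause_resume(benchmark::State& state)
{
    for (auto _ : state)
    {
        state.PauseTiming();
        init_memory();
        state.ResumeTiming();
        execute();
    }
}
BENCHMARK(with_pause_resume);

// New approach: keep the timer running; the ~100-byte init is cheap enough to be
// included in the measured time.
void without_pause_resume(benchmark::State& state)
{
    for (auto _ : state)
    {
        init_memory();
        execute();
    }
}
BENCHMARK(without_pause_resume);
}  // namespace

BENCHMARK_MAIN();
```

The large relative improvements in the micro benchmarks above (eli_interpreter/halt, host_adler32/1, spinner/1) are consistent with removing this fixed per-iteration timer overhead.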

@chfast requested review from axic and gumb0 on June 9, 2020 16:59
@axic (Member) commented Jun 10, 2020

Will this not make it harder to compare 0.2.0 against 0.1.0? Or did we introduce this pausing since 0.1.0?

@chfast (Collaborator, Author) commented Jun 12, 2020

> Will this not make it harder to compare 0.2.0 against 0.1.0? Or did we introduce this pausing since 0.1.0?

I will need to backport the bench changes to 0.1 before the comparison.

@axic (Member) left a comment

Assuming you create 0.1.1 with that backport?

@chfast (Collaborator, Author) commented Jun 25, 2020

> Assuming you create 0.1.1 with that backport?

Yes, it can be done like that when another release is ready. Or do it locally and never go back to 0.1.

@codecov (bot) commented Jun 25, 2020

Codecov Report

Merging #383 into master will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master     #383   +/-   ##
=======================================
  Coverage   99.32%   99.32%           
=======================================
  Files          42       42           
  Lines       12808    12814    +6     
=======================================
+ Hits        12722    12728    +6     
  Misses         86       86           

@chfast merged commit 71397c7 into master on Jun 25, 2020
@chfast deleted the bench branch on June 25, 2020 08:35