
perf: fuzz/invariant benchmarks #3411

Open

mds1 opened this issue Sep 29, 2022 · 7 comments
Labels
A-testing Area: testing C-forge Command: forge T-perf Type: performance

@mds1 (Collaborator) commented Sep 29, 2022

Component

Forge

Describe the feature you would like

There are currently no benchmarks for the fuzz and invariant test features, so it's hard to know how good forge actually is compared to other tools like echidna and harvey.

Echidna has a lot of test cases in its repo, maybe those can be used? https://github.com/crytic/echidna/tree/master/tests/solidity

Additional context

No response
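To make such benchmarks comparable across forge revisions, a harness would need to pin the fuzzing parameters rather than rely on defaults. A minimal sketch of a dedicated `foundry.toml` profile follows; the key names reflect Foundry's `[fuzz]`/`[invariant]` config sections, but the specific values are illustrative and should be checked against the forge version under test:

```toml
# Illustrative benchmark profile: pin run counts and the seed so timings
# are comparable between forge revisions. Values are arbitrary examples.
[profile.bench.fuzz]
runs = 10000      # fixed number of fuzz runs per property test
seed = "0x1"      # fixed seed for reproducibility

[profile.bench.invariant]
runs = 256        # number of independent invariant campaigns
depth = 500       # calls per campaign
```

A CI job could then select this profile (e.g. via `FOUNDRY_PROFILE=bench`) so every benchmark run exercises the same workload.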

@mds1 mds1 added the T-feature Type: feature label Sep 29, 2022
@rkrasiuk rkrasiuk added A-testing Area: testing C-forge Command: forge T-perf Type: performance and removed T-feature Type: feature labels Sep 29, 2022
@mds1 (Collaborator, Author) commented Feb 27, 2023

Maybe https://blog.trailofbits.com/2023/02/27/reusable-properties-ethereum-contracts-echidna/ would also be useful, e.g. the bug found in ABDK

@0xMelkor (Contributor) commented

Hi guys, I'm taking this up.

@0xMelkor (Contributor) commented

Hi guys,

I think this issue is strictly dependent on

In fact, making Forge more configurable and richer in terms of collected metrics would seriously help the benchmarking process.

I'll continue my investigation and try to unblock those dependencies.

Peace

@0xMelkor 0xMelkor mentioned this issue Apr 14, 2023
@brockelmore (Member) commented

Since this issue was opened, this was created: https://github.com/Consensys/daedaluzz
Results: https://consensys.io/diligence/blog/2023/04/benchmarking-smart-contract-fuzzers/

[Images: benchmark result charts]

And this: https://twitter.com/agfviggiano/status/1682631396702003200
Results:

[Image: benchmark result chart]

Personally, I would close this issue and point to these benches.

@mds1 (Collaborator, Author) commented Jul 28, 2023

Sorry, the issue description is unclear here: the intent was to develop benchmarks and include them in CI, or maintain the benchmark results somewhere. Equally important to comparing against echidna and other tools is having a concrete way of measuring how changes to the forge fuzzer impact its performance.

I agree we don't need to develop our own benchmarks now and can use these; we just need to actually integrate them into CI and persist the results somehow, for comparison when we make future fuzzer changes.
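The "persist results and compare in CI" idea above could be as simple as storing a baseline of per-benchmark wall-clock times and failing the job on a significant slowdown. A hypothetical sketch follows; the function name, data layout, and tolerance are all invented for illustration and are not part of forge:

```python
# Hypothetical CI regression check: compare current benchmark timings
# (seconds) against a stored baseline and flag slowdowns beyond a tolerance.
import json


def find_regressions(baseline: dict, current: dict, tolerance: float = 0.10):
    """Return benchmarks whose current time exceeds baseline by > tolerance."""
    regressions = {}
    for name, base_secs in baseline.items():
        cur_secs = current.get(name)
        if cur_secs is None:
            continue  # benchmark removed or renamed; handled elsewhere
        if cur_secs > base_secs * (1 + tolerance):
            regressions[name] = (base_secs, cur_secs)
    return regressions


if __name__ == "__main__":
    # Invented example data: per-benchmark wall-clock seconds.
    baseline = {"fuzz/erc20": 12.0, "invariant/vault": 40.0}
    current = {"fuzz/erc20": 12.5, "invariant/vault": 55.0}
    print(json.dumps(find_regressions(baseline, current)))
```

A CI job would load the baseline from an artifact of the last accepted run, and refresh it whenever a slowdown is deliberately accepted.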

@sambacha (Contributor) commented

These benchmarks should not be taken seriously beyond a superficial comparison.

@0xalpharush (Contributor) commented

These are really great guidelines for designing a benchmark: https://github.com/fuzz-evaluator/guidelines
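One recurring recommendation in fuzzer-evaluation guidelines like these is to run many independent trials and report a robust statistic instead of a single run. A minimal sketch under that assumption, with entirely invented data (`None` marks a trial that never found the bug within the time budget):

```python
# Report median seconds-to-first-bug across independent fuzzing campaigns,
# counting trials that missed the bug as consuming the full time budget.
from statistics import median


def median_time_to_bug(trial_times, budget):
    """Median seconds-to-bug; misses (None) count as the full budget."""
    return median(budget if t is None else t for t in trial_times)


trials = [12.0, 8.5, None, 30.2, 9.9]  # 5 independent campaigns, 60s budget
print(median_time_to_bug(trials, budget=60.0))  # -> 12.0
```

Reporting the spread across trials (e.g. quartiles) alongside the median would follow the same pattern.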
