More granular control on invariant simulations #5018
Comments
cc @grandizzy, feel free to close if this is no longer accurate given the rework, or turn it into an actionable ticket.
This makes sense, but it would need some more discussion on the best UX and implementation, as it could lead to overly complex configuration. There should most likely be a per-invariant config, something like the example below, where one could specify how specific selectors should apply (weight, run depth at which to start and end fuzzing, etc.):

```toml
[invariant]
runs = 100
fail_on_revert = false

[invariant.invariant_test1]
runs = 10
depth = 1000

[invariant.invariant_test1.Target.selector1]
weight = 10
start_depth = 500
end_depth = 700

[invariant.invariant_test1.Target.selector2]
weight = 50
start_depth = 0
end_depth = 1000
```

CCing @mds1 for insights.
Ah, start and end depth are very interesting. I feel like this can be split into two tasks: one for weighting, one for depths. Since depths are config dependent, I think it makes sense for them to live in the TOML, and the config above would work. I think the weighting should be declared with something simple in Solidity, where each weight is an arbitrary number and the selection probability is the weight's share of the total (functionName1 would be 25% in the example I have in mind). This gives people flexibility to express what they want. At Maple we did something like this manually, and it became tedious to constantly refactor to make sure everything added up to 100%. Just my 2 cents.
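The original Solidity snippet did not survive extraction; a hypothetical sketch of what such a weight declaration could look like is below. `targetSelectorWeight` is not an existing Foundry cheatcode, and `handler` and its function names are assumptions, but it shows the shape of the proposal: arbitrary weights, normalized by their sum.

```solidity
// Hypothetical API sketch, not an existing Foundry cheatcode.
function setUp() public {
    // Selection probability = weight / sum of all weights.
    targetSelectorWeight(address(handler), handler.functionName1.selector, 25); // 25 / 100 = 25%
    targetSelectorWeight(address(handler), handler.functionName2.selector, 50); // 50%
    targetSelectorWeight(address(handler), handler.functionName3.selector, 25); // 25%
}
```

Because weights are arbitrary rather than required to sum to 100, adding a new function never forces rebalancing the others.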
Invariant test UX is already confusing for many users, and I'm worried about this making it even less approachable. Once a feature is added people start to rely on it, and it becomes ~impossible to remove. Are we sure this is the best solution to the problem? Maybe there are ways we can make the fuzzer smarter, e.g. discard reverting calls and stop calling a method if it's determined to always revert at a certain point. It also seems like this can be solved with well-designed handler contracts.
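The handler-contract approach mentioned above can be sketched as follows. The `Vault` contract is a hypothetical system under test; `bound` is the forge-std helper that clamps a fuzzed value into a range. The idea is that the handler skips or clamps calls the fuzzer would otherwise waste on guaranteed reverts:

```solidity
// Sketch of a guarded handler, assuming a hypothetical Vault under test.
contract Handler is Test {
    Vault vault;

    function withdraw(uint256 amount) external {
        uint256 balance = vault.balanceOf(address(this));
        if (balance == 0) return;           // skip instead of reverting
        amount = bound(amount, 1, balance); // clamp input into the valid range
        vault.withdraw(amount);
    }
}
```

With the handler as the fuzz target, every generated `withdraw` call is valid by construction, so revert counts drop without any fuzzer-level configuration.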
Component
Forge
Describe the feature you would like
In order to reduce the number of reverts in a simulated run, it would be very helpful to be able to assign different weights (probability of being called) to functions of a given target contract. Furthermore, it would be helpful to configure the run so that some functions are only called towards the end of the run, or are called with higher probability towards the end of a run. For example, when running invariants against a protocol that has loan cycles, where certain functions in the system should only be called after certain other actions have taken place, it would be helpful to control the order of the call sequence to a certain degree. Otherwise the number of reverts is extremely high. Does anything like this exist, or has there been a discussion around it? I could not find anything in the docs.
Additional context
No response
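The depth-gated behavior requested in this issue can be approximated by hand today in a handler contract, at the cost of the manual bookkeeping the proposal would eliminate. A hypothetical sketch (the `Pool` contract and the call counter as a proxy for run depth are assumptions):

```solidity
// Sketch of manually gating a call by run depth, assuming a hypothetical Pool.
contract Handler {
    Pool pool;
    uint256 public calls; // rough proxy for the fuzzer's run depth

    function closeLoan(uint256 id) external {
        calls++;
        if (calls < 500) return; // only attempt late in the run (cf. start_depth = 500)
        pool.closeLoan(id);
    }
}
```

A native `start_depth`/`end_depth` config would express the same intent declaratively, without each handler tracking its own counter.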