feat(forge): Add internal metrics capability #3607
Comments
cc @onbjerg @mattsse I'm still thinking about this somewhat abstractly, but I was thinking of exposing an API via cheatcodes similar to https://docs.rs/metrics/latest/metrics/, and on the CLI side we would collect all these metrics and log them in a custom table.
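To make the idea above concrete, here is a minimal sketch in plain Rust of the kind of in-process counter sink such a cheatcode could feed into before the CLI renders it as a table. All names here are hypothetical, not real Foundry or `metrics`-crate API:

```rust
use std::collections::BTreeMap;

/// Minimal in-process metrics sink illustrating the kind of counter API a
/// metrics cheatcode could expose. Names are invented for illustration.
#[derive(Default)]
struct MetricsSink {
    counters: BTreeMap<String, u64>,
}

impl MetricsSink {
    /// Increment a named counter, mirroring the `metrics` crate's counter semantics.
    fn increment(&mut self, key: &str, value: u64) {
        *self.counters.entry(key.to_string()).or_insert(0) += value;
    }

    /// Render all counters as a simple two-column table for an end-of-campaign report.
    fn render_table(&self) -> String {
        self.counters
            .iter()
            .map(|(k, v)| format!("{k:<24} {v}"))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let mut sink = MetricsSink::default();
    sink.increment("borrow_calls", 3);
    sink.increment("borrow_reverts", 1);
    println!("{}", sink.render_table());
}
```

The real implementation would presumably live in the invariant executor and be keyed per test run; this only shows the shape of the API surface.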
@lucas-manuel can you give an example of how your ideal reporting would look? A table? Something else? Maybe there should be plots like https://docs.rs/tui/latest/tui/widgets/struct.Chart.html for things that evolve over time (e.g. price as volatility happens)? Makes me think this may allow creating automatic simulation / stress test reports like Gauntlet Network does.
@gakonst Yeah, personally I think the lowest-hanging fruit would be to log results in a table similar to the example above. Going forward, though, for the more sophisticated use cases we discussed, it would be interesting to export some sort of JSON that could be used to generate more visual reporting (it could be added to CI as an artifact, for example).
@FrankieIsLost @transmissions11 had some thoughts on this which I'd love if they shared in the thread :)
Great idea to give users programmable insight into their invariant campaigns; I've been a big advocate of this for a while now. IMO, giving devs a better understanding of the coverage of their invariant tests, and tools to effectively debug them, is far more valuable than building smarter fuzzers after a certain point. Once a weakness is identified, it's not too hard to guide the fuzzer towards it, as opposed to a genius fuzzer that's a total black box from a dev's perspective, which offers little insight into how secure a piece of code is and whether a dev can be confident in the coverage of the run. Humans and fuzzers should work in tandem! In terms of actual design:
WDYT?
In addition to these more complex metrics, it would be very helpful to see a breakdown of calls/reverts by target contract + selector. For example, something like:
This would be very helpful for writing and debugging new actor contracts.
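The per-contract, per-selector breakdown described above could be sketched like this in Rust; all contract and selector names below are invented for illustration:

```rust
use std::collections::BTreeMap;

/// Per-(contract, selector) call statistics, a sketch of the summary table
/// proposed above. All contract and selector names are invented.
#[derive(Default, Clone, Copy)]
struct CallStats {
    calls: u64,
    reverts: u64,
}

/// Render the breakdown as a fixed-width table.
fn render_summary(stats: &BTreeMap<(String, String), CallStats>) -> String {
    let mut out = format!("{:<16}{:<22}{:>7}{:>9}\n", "contract", "selector", "calls", "reverts");
    for ((contract, selector), s) in stats {
        out.push_str(&format!("{contract:<16}{selector:<22}{:>7}{:>9}\n", s.calls, s.reverts));
    }
    out
}

fn main() {
    let mut stats: BTreeMap<(String, String), CallStats> = BTreeMap::new();
    stats.insert(
        ("Borrower".into(), "borrow(uint256)".into()),
        CallStats { calls: 120, reverts: 17 },
    );
    stats.insert(
        ("Lender".into(), "deposit(uint256)".into()),
        CallStats { calls: 95, reverts: 2 },
    );
    print!("{}", render_summary(&stats));
}
```

A table like this, printed once at the end of a campaign, would immediately show which actor entry points are mostly reverting.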
Love that Gearbox chart 👀 Yeah, I agree it would be better to design this without needing to persist storage within the contracts for this purpose. I like the suggestions above. What are our next steps here? I also completely agree with the summary table idea, @horsefacts.
Seconding gakonst on using Prometheus as the engine/standard, and then we can either:
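If Prometheus were the standard, forge could emit campaign results in the Prometheus text exposition format. A hypothetical sample (metric and label names are invented, not anything forge produces today):

```text
# HELP forge_invariant_calls_total Calls made to target selectors during the campaign
# TYPE forge_invariant_calls_total counter
forge_invariant_calls_total{contract="Borrower",selector="borrow(uint256)"} 1520
# HELP forge_invariant_reverts_total Reverts observed per target selector
# TYPE forge_invariant_reverts_total counter
forge_invariant_reverts_total{contract="Borrower",selector="borrow(uint256)"} 214
```

Anything that speaks this format can then be scraped, stored, and plotted by the existing Prometheus/Grafana toolchain.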
@lucas-manuel Until this issue gets implemented, do you know any way to output the call summary only at the end of the invariant test run? I saw this in your code:

```solidity
function invariant_call_summary() external view {
    console.log("\nCall Summary\n");
    ///
}
```

But, if I understand this correctly, this test function will be executed at the end of each run, which will slow down test runs. Is there any way to output the call summary only once, at the end of the whole invariant test run? As far as I know, there is no "after all" hook in Foundry (a function opposite to `setUp`).
Yeah, it'll slow it down, but only a bit, I would assume. We should probably add that kind of
It depends on how many
Would be super helpful.
Considering @transmissions11's comment: I think a nice and easy way to get such metrics is to use the OpenTelemetry (OTLP) open source standard for logs, metrics, and traces, as it already provides crates to facilitate such an integration; see the Rust bindings and crates. A big pro of this is that we can easily integrate with forge and support not only Prometheus but other tools at the same time by
I didn't put too much thought into UX, but at a first pass there'll be
Lmk what you guys think about this approach; I think a PoC for this can be done quite easily, but I wouldn't spend time on it if there's no interest. Thanks!
I made a quick PoC, see https://github.com/grandizzy/foundry/tree/metrics/crates/metrics/examples#metrics-demo for reference. (The code adds a metrics crate; there's no config yet, the cheatcodes are not in their own metrics group, and better UX is needed. For a quick view of the code changes, please see grandizzy@919e84b.)
The list of exporters that can be used in the otel config file can be found here. I see three use cases / dimensions that can be accomplished by having such
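As one concrete wiring for the OTLP approach, an OpenTelemetry Collector could receive metrics from forge over OTLP and expose them for Prometheus to scrape. A minimal sketch of a Collector config (endpoints and the exporter choice are placeholders, not anything forge ships):

```yaml
# Minimal OpenTelemetry Collector pipeline: receive OTLP, expose for Prometheus.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  prometheus:
    endpoint: 0.0.0.0:9464

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```

Swapping the `prometheus` exporter for another one (Kafka, OTLP to a vendor, etc.) is what makes the standard attractive: forge would only ever need to speak OTLP.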
For example: metrics are collected and exported to AWS Kinesis, Apache Kafka, etc.; then campaign metrics are processed and
Any feedback appreciated, thank you!
@Evalir we can continue the discussion here; here's a quick overview of the changes needed to have such metrics: master...grandizzy:foundry:metrics. See also the previous comments regarding the sample and the other use cases this could be used for.
Adding on OpenTelemetry adoption / usage: Grafana just announced their open source collector (Grafana Alloy) https://grafana.com/blog/2024/04/09/grafana-alloy-opentelemetry-collector-with-prometheus-pipelines/
Component
Forge
Describe the feature you would like
When running invariant tests with an actor-based pattern, there is currently a lack of visibility into:
Having a cheatcode that allows storing this information under arbitrary keys would make it possible to render this info in a summary table at the end of a fuzzing campaign.
@gakonst mentioned Prometheus metrics as a good reference for this.
Examples:
Additional context
No response