consider benchmarking the lints #57574
Comments
The linter has a --stats flag that will provide timing information for each of the lints run over whatever code base is being linted. That should be a reasonable start for automating something.
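For a rough sense of the shape of that measurement, per-rule timings could be collected with a pattern like this (a minimal sketch; `LintRule` and the rule bodies here are hypothetical stand-ins, not the linter's real API):

```dart
// Hypothetical sketch of per-rule timing; `LintRule` stands in for the
// linter's real rule type. Wrapping each rule in its own Stopwatch makes
// slow rules visible individually instead of as one aggregate number.
class LintRule {
  LintRule(this.name, this.run);

  final String name;
  final void Function() run;
}

void main() {
  // Stand-ins for real rules that would visit a resolved AST.
  final rules = <LintRule>[
    LintRule('rule_a', () {}),
    LintRule('rule_b', () {}),
  ];

  for (final rule in rules) {
    final watch = Stopwatch()..start();
    rule.run();
    watch.stop();
    print('${rule.name}: ${watch.elapsedMilliseconds}ms');
  }
}
```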
Sounds great. As for the stats flag, Brian is right: it gives us a nice jumping-off point. For context, here are the salient details from running all the lints against the linter codebase:
@devoncarew: let's set aside some time later this week to brainstorm. Looking forward! 👍
Ah, fantastic! I didn't realize. Seems like much of the work is done already.
Picking up from dart-archive/linter#671, there are a few next steps. To that end, I'd like to consider a new flag (@devoncarew, @bwilkerson for thoughts).
Multiple runs seem interesting, but keep in mind that the Travis environment is virtualized - we don't have great control over how fast it is from run to run. I think with this benchmarking solution we're just trying to get rough relative sizes of things.
The trouble with the current approach is that we're seeing wild variation when we run locally (for example, swings from ~75 to ~150ms). cc @alexeieleusis: dart-archive/linter#672 should be interesting to you 👍
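For what it's worth, one standard way to damp that kind of variance is to time several runs and report the median; a minimal sketch (names illustrative, not the linter's actual approach):

```dart
// Sketch of noise reduction via repeated runs: time the same workload
// several times and report the median, which is more stable than any
// single sample on a noisy (e.g. virtualized CI) machine.
int medianMillis(void Function() workload, {int runs = 10}) {
  final samples = <int>[];
  for (var i = 0; i < runs; i++) {
    final watch = Stopwatch()..start();
    workload();
    watch.stop();
    samples.add(watch.elapsedMilliseconds);
  }
  samples.sort();
  return samples[samples.length ~/ 2];
}

void main() {
  final ms = medianMillis(() {
    // Stand-in for running one lint over a codebase.
    var sum = 0;
    for (var i = 0; i < 1000000; i++) {
      sum += i;
    }
    if (sum < 0) print(sum); // keep the loop from being optimized away
  });
  print('median over 10 runs: ${ms}ms');
}
```

The median is preferable to the mean here because an occasional pathological run (a GC pause, a noisy neighbor on the VM) skews an average but barely moves the middle sample.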
We're live with basic benchmarking and one perf bug opened (#57575) 🤘 There are lots of possible next steps (better reporting, a more representative code base), but my gut says the first bit is done and we can close this in favor of follow-ups.
Sounds reasonable to me. In terms of filing follow-up issues, running it over a large, representative codebase seems like a good one.
We've had issues where individual lints can have an outsized performance impact on overall analysis time. We should set up a benchmarking system so we can get a rough sense of which lints are performant, which are slow, and which have severe perf issues. cc @pq
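To make such output actionable, the per-lint timings could be bucketed roughly like this (the cutoffs below are invented purely for illustration; a real system would choose them empirically or relative to a baseline):

```dart
// Hypothetical bucketing of per-lint timings into the rough categories
// described above. The thresholds are made up for the sake of the example.
String bucket(int millis) {
  if (millis < 50) return 'performant';
  if (millis < 250) return 'slow';
  return 'severe';
}

void main() {
  // Illustrative timings, not real measurements.
  final timings = {'lint_a': 12, 'lint_b': 140, 'lint_c': 900};
  timings.forEach((name, ms) => print('$name (${ms}ms): ${bucket(ms)}'));
}
```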
I have worked on some similar systems in the past - we may be able to share some code or concepts here.