mypy_primer: add to CI #9686
Conversation
This will run mypy_primer on mypy PRs. Changes on the open source corpus are reported as comments on the PR. We integrated this into typeshed CI; you can see examples here: python/typeshed#3183 python/typeshed#4734 This might be a little slow. On typeshed this runs in 10 minutes, but it's using a mypyc compiled wheel. It looks like it takes three minutes to compile a MYPYC_OPT_LEVEL=0 wheel in our CI currently (for a ~2x speedup), so that's probably worth it.
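The step described above could be sketched roughly as follows. This is not the PR's actual workflow; the `PR_COMMIT` and `MERGE_BASE` variables are assumptions standing in for whatever the CI provides, and the mypy_primer flags shown are from its usual CLI:

```shell
set -ex

# Build a mypyc-compiled wheel of the PR's mypy from a checkout of the
# PR branch. MYPYC_OPT_LEVEL=0 keeps the compile fast (~3 minutes)
# while still giving roughly a 2x speedup over interpreted mypy.
MYPYC_OPT_LEVEL=0 pip wheel --no-deps -w dist .

# Run mypy_primer over the open source corpus, comparing the PR's mypy
# against its merge base; the resulting diff of mypy output is what
# gets posted as a comment on the PR.
pip install mypy_primer
mypy_primer --new "$PR_COMMIT" --old "$MERGE_BASE" -o concise | tee diff.txt
```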
I like the idea.
We could also run on a subset of projects (sympy in particular takes a long time).
I vote for this. We should selectively pick the subset so that they can be both representative and fast, a pull request CI is already taking too much time IMO.
Good stuff. This has the potential to catch many regressions earlier. Since I'd like to get mypy CI to run in ~10 minutes eventually (for simple PRs), it would be great if mypy_primer could be tuned to complete in roughly that amount of time. One option would be to split it into two jobs, each doing half the work.
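The two-job split could look something like this sketch, assuming mypy_primer's shard flags (`--num-shards` / `--shard-index`); `SHARD` is a hypothetical per-job variable (0 or 1) that a CI matrix would set:

```shell
# Each CI job runs half of the project corpus; the shard flags divide
# the projects deterministically so the two jobs don't overlap.
mypy_primer \
  --new "$PR_COMMIT" --old "$MERGE_BASE" \
  --num-shards 2 --shard-index "$SHARD" \
  -o concise > "diff_${SHARD}.txt"
```

A final step would then concatenate the two diffs before posting the PR comment.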
This is great! Regression tests are always a good idea. Agreed, splitting it in half seems like a good option. Another option is to run it only on merges to master, or otherwise less often.
It now compiles mypy with MYPYC_OPT_LEVEL=0. Sharding is probably the easiest way to get to ten minutes. Another idea I had was using ccache to make it feasible to compile with higher optimization levels. I imagine I'd need to at least use MYPYC_MULTI_FILE to get that to work; are mypyc builds reproducible?
MYPYC_MULTI_FILE is pretty experimental. If we can get it to work, I don't see any blockers for using ccache (but that doesn't mean that there won't be any). Higher optimization levels might still make compilation quite slow when a common dependency is changed, but it could well be a net positive.
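The ccache idea above could be tried with a build sketch along these lines. The MYPYC_* environment variables are read by mypy's build; the rest assumes ccache is installed and that mypyc's generated C is reproducible enough for cache hits:

```shell
# Route C compilation through ccache so unchanged modules are reused
# across builds. MYPYC_MULTI_FILE=1 emits one C file per module rather
# than one big translation unit, which is what makes per-module
# caching possible in the first place.
export CC="ccache gcc"
export MYPYC_OPT_LEVEL=2
export MYPYC_MULTI_FILE=1
pip wheel --no-deps -w dist .

# Inspect the hit rate to see whether caching actually pays off.
ccache --show-stats
```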
Let's try this out. If/when we start improving CI runtimes, we may want to split this into two jobs.