[red-knot] mypy_primer: larger depot runner #17547
Conversation
Force-pushed from 86cc326 to 13885d4.
I wonder if we should experiment with passing
Certainly worth doing an experiment, but note that we profit massively from large concurrency during the IO-heavy setup stage (cloning + dependency installation). That would suggest that we should rather limit the number of cores for our red knot runs. Or just hope that the OS scheduler does a good job.
Right, we could experiment with setting
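One possible experiment along these lines (a hypothetical sketch, not code from this PR; the helper name and the even-split policy are mine): divide the runner's cores across the concurrent checker subprocesses so that each internally-parallel checker gets a bounded thread budget instead of oversubscribing the CPU:

```python
def threads_per_checker(total_cores: int, concurrency: int) -> int:
    """Split a machine's cores evenly across concurrent checker
    subprocesses. Each subprocess would be told its budget (e.g. via an
    environment variable) before being spawned, so an internally-parallel
    checker doesn't oversubscribe the CPU."""
    return max(1, total_cores // concurrency)

# On a 32-core runner with 8 concurrent projects, each checker
# process would be capped at 4 threads:
print(threads_per_checker(32, 8))   # → 4
# With more concurrent projects than cores, each still gets one thread:
print(threads_per_checker(16, 32))  # → 1
```

Whether this actually helps would depend on how much of the wall-clock time is spent in the CPU-bound checker stage versus the IO-bound setup stage.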
AlexWaygood left a comment
Seems fine to me, but I don't know how much upgrading to the larger runner might cost us financially (@zanieb?)
I looked at this table, which shows that the -32 runners have a 16x minutes-multiplier, compared to the 8x multiplier for the -16 runners. So it looks to me like they're twice as expensive per minute, but we only use them for ~75% of the time. So probably 50% more expensive overall.

Edit: I'm merging this with the assumption that this is not a significant rise in CI costs, but let me know if this is a concern, and we can simply roll it back.
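The cost estimate above can be double-checked with quick arithmetic (a sketch using the multipliers quoted from the pricing table; the function name is mine):

```python
def relative_cost(old_multiplier: float, new_multiplier: float,
                  time_ratio: float) -> float:
    """Cost of a job on the new runner relative to the old one:
    the per-minute price ratio scaled by the new wall-clock time
    as a fraction of the old."""
    return (new_multiplier / old_multiplier) * time_ratio

# -16 runner: 8x minutes-multiplier; -32 runner: 16x multiplier;
# the job finishes in ~75% of the previous wall-clock time.
print(relative_cost(8, 16, 0.75))  # → 1.5, i.e. ~50% more expensive
```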
I wouldn't want to drop

In general, I'm supportive of biasing towards spending money to speed up CI :) I'll keep an eye on the total spend.
I experimented with this change to mypy_primer locally to see if it sped up running mypy_primer with red-knot at all (the idea is to keep mypy_primer's concurrency for the setup stage but limit it to one subprocess actually running red-knot at any one time, since red-knot internally uses concurrency anyway):

```diff
--- a/mypy_primer/model.py
+++ b/mypy_primer/model.py
@@ -319,6 +319,7 @@ class Project:
             check=False,
             cwd=ctx.get().projects_dir / self.name,
             env=env,
+            use_checker_lock=True,
         )
         if ctx.get().debug:
             debug_print(f"{Style.BLUE}{knot} on {self.name} took {runtime:.2f}s{Style.RESET}")
diff --git a/mypy_primer/utils.py b/mypy_primer/utils.py
index 8d2aacd..cb60595 100644
--- a/mypy_primer/utils.py
+++ b/mypy_primer/utils.py
@@ -67,6 +67,7 @@ def debug_print(obj: Any) -> None:

 _semaphore: asyncio.Semaphore | None = None
+_checker_lock = asyncio.Lock()


 async def run(
@@ -75,6 +76,7 @@ async def run(
     shell: bool = False,
     output: bool = False,
     check: bool = True,
+    use_checker_lock: bool = False,
     **kwargs: Any,
 ) -> tuple[subprocess.CompletedProcess[str], float]:
     if output:
@@ -87,7 +89,7 @@ async def run(
     global _semaphore
     if _semaphore is None:
         _semaphore = asyncio.BoundedSemaphore(ctx.get().concurrency)
-    async with _semaphore:
+    async with _checker_lock if use_checker_lock else _semaphore:
         if ctx.get().debug:
             log = cmd if shell else shlex.join(cmd)
             log = f"{Style.BLUE}{log}"
```

Unfortunately it seems like it just slows mypy_primer down rather than speeding it up at all!
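The idea in that diff — checker invocations serialize on a dedicated lock while setup commands share the usual semaphore — can be reduced to a self-contained sketch (the names, counts, and sleep here are stand-ins, not mypy_primer's real code):

```python
import asyncio

async def run(name: str, *, use_checker_lock: bool,
              sem: asyncio.Semaphore, lock: asyncio.Lock) -> str:
    # Checker runs take the single lock (the checker is internally
    # parallel, so only one runs at a time); setup tasks merely
    # contend for one of the semaphore's slots.
    async with lock if use_checker_lock else sem:
        await asyncio.sleep(0.001)  # stands in for the subprocess call
        return name

async def main() -> list[str]:
    sem = asyncio.BoundedSemaphore(4)  # stands in for ctx.get().concurrency
    lock = asyncio.Lock()
    setup = [run(f"setup-{i}", use_checker_lock=False, sem=sem, lock=lock)
             for i in range(8)]
    checks = [run(f"check-{i}", use_checker_lock=True, sem=sem, lock=lock)
              for i in range(8)]
    return await asyncio.gather(*setup, *checks)

results = asyncio.run(main())
print(len(results))  # → 16
```

One plausible reason the real experiment slowed things down: serializing checker runs removes the overlap between one project's check and another project's setup, and when individual red-knot runs are short, the lost overlap can outweigh the reduced CPU contention.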
Summary
A switch from 16 to 32 cores reduces the mypy_primer CI time from 3.5-4 min to 2.5-3 min. There's also a 64-core runner, but the ~4 min -> ~3 min improvement from the first doubling already suggests that this workload doesn't parallelize well.
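That reading can be backed up with a rough Amdahl-style fit (my back-of-envelope numbers, assuming ~4 min at 16 cores and ~3 min at 32): modeling runtime as T(n) = s + w/n, the two measurements imply about 2 min of non-parallelizable time, so even a 64-core runner would only be predicted to reach ~2.5 min.

```python
def fit_serial_parallel(t1: float, n1: int,
                        t2: float, n2: int) -> tuple[float, float]:
    """Solve T(n) = s + w/n for the serial time s and the parallel
    work w, given two (runtime, cores) measurements."""
    w = (t1 - t2) / (1 / n1 - 1 / n2)
    s = t1 - w / n1
    return s, w

s, w = fit_serial_parallel(4.0, 16, 3.0, 32)  # minutes, cores
print(s, w)        # → 2.0 32.0 (2 min serial, 32 core-minutes parallel work)
print(s + w / 64)  # → 2.5 (predicted runtime on a 64-core runner)
```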