Do not run per-module late lints if they can be all skipped #139597
Conversation
@bors try @rust-timer queue

This won't actually produce any improvements in rustc-perf; I want to get a compiler build and see if there are any regressions from the new check.
Do not run per-module lints if they can be all skipped

We run ~70 late lints for all dependencies even if they use `--cap-lints=allow`, which seems wasteful. It looks like these lints are super fast, but still.

r? `@ghost`
Actually, locally it looks like the early lints are much more expensive, even though there are fewer of them. It saved 3% of time locally while building dependencies (modulo parallelism, though).
☀️ Try build successful - checks-actions
Finished benchmarking commit (0d5b1b6): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count: This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage): Results (primary -2.5%, secondary 1.4%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: This benchmark run did not return any relevant results for this metric.

Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 780.203s -> 781.223s (0.13%)
@rustbot ready

r? compiler
// at all. This happens often for dependencies built with `--cap-lints=allow`.
let dont_need_to_run = tcx.lints_that_dont_need_to_run(());
let can_skip_lints =
    builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
Suggested change:
- builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
+ builtin_lints.get_lints().iter().all(|lint| dont_need_to_run.contains(&LintId::of(lint)));
i misread `l` as `1` and was like wow, it's kinda weird to hardcode `LintId::of(1)` lmao
        builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
    if !can_skip_lints {
        late_lint_mod_inner(tcx, module_def_id, context, builtin_lints);
    }
} else {
    let builtin_lints = Box::new(builtin_lints) as Box<dyn LateLintPass<'tcx>>;
    let mut binding = store
why not do the same thing here that we do in `late_lint_crate` and filter out passes that only contain stuff that doesn't need to be run?
We could do that, but I think it would be slower. When we run everything, it is built into the compile-time-prepared structure, which merges all lints together. If we filtered individual lints, they would need to be built using a runtime-prepared structure. For the Clippy lints that makes sense, but here I essentially only want to distinguish between "everything is disabled due to `--cap-lints=allow`" and "something might be disabled, but let's still run in the merged mode".
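For illustration, here is a minimal sketch of the two shapes being contrasted. The type and method names (`Pass`, `CombinedBuiltinPass`, `RuntimeCombinedPass`) are made up for the example and are not rustc's actual lint infrastructure:

```rust
// Hypothetical sketch, not rustc's real types: a compile-time-combined lint
// pass versus a runtime-combined one.
trait Pass {
    fn check_mod(&mut self);
}

struct LintA;
struct LintB;

impl Pass for LintA {
    fn check_mod(&mut self) { /* lint A's per-module logic */ }
}
impl Pass for LintB {
    fn check_mod(&mut self) { /* lint B's per-module logic */ }
}

// "Compile-time-prepared": one struct whose visitor methods call every
// builtin lint directly; individual lints cannot be dropped from the outside.
struct CombinedBuiltinPass {
    a: LintA,
    b: LintB,
}

impl Pass for CombinedBuiltinPass {
    fn check_mod(&mut self) {
        self.a.check_mod();
        self.b.check_mod();
    }
}

// "Runtime-prepared": a vector of boxed passes that can be filtered before
// the visit, at the cost of building the vector and dynamic dispatch.
struct RuntimeCombinedPass {
    passes: Vec<Box<dyn Pass>>,
}

impl Pass for RuntimeCombinedPass {
    fn check_mod(&mut self) {
        for pass in &mut self.passes {
            pass.check_mod();
        }
    }
}
```

The runtime shape makes per-pass filtering straightforward, but pays dynamic dispatch for every pass on every callback, which is the overhead the comment above wants to avoid for the builtin lints.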
i meant in the `else` branch (but github doesn't let me comment on the right line), where we build the `RuntimeCombinedLateLintPass` but don't do any filtering out of `dyn LateLintPass`es that only contain allowed lints. e.g.
let mut binding = store
    .late_module_passes
    .iter()
    .map(|mk_pass| (mk_pass)(tcx))
    .chain(std::iter::once(builtin_lints))
    .filter(|pass| { ... }) // new, filter out passes that only contain allowed lints
    .collect::<Vec<_>>();
But `builtin_lints` is a single `LintPass` that combines all the builtin late lints together. In other words, in its e.g. `check_mod` implementation, it calls:

LINT_1.check_mod();
LINT_2.check_mod();

etc. So we cannot filter these lints from the outside. In theory, we could filter them inside these lint visitor functions, but that would mean doing that check for every call of each visitor function times the number of lints we have, which... doesn't sound fast 😅
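To make that cost concrete, here is a hedged sketch of what filtering "inside the visitor functions" would look like; the names (`skipped`, `LINT_1`, `LINT_2`) are illustrative only and not rustc's actual combined builtin pass:

```rust
use std::collections::HashSet;

// Illustrative only; not rustc's actual combined builtin lint pass.
struct CombinedBuiltinPass {
    skipped: HashSet<&'static str>,
}

impl CombinedBuiltinPass {
    fn check_mod(&mut self) {
        // This per-lint membership test would have to be repeated in every
        // check_* method, for every node the visitor reaches, times ~70 lints.
        if !self.skipped.contains("LINT_1") {
            // LINT_1's check_mod body
        }
        if !self.skipped.contains("LINT_2") {
            // LINT_2's check_mod body
        }
        // ...and so on for the remaining builtin lints
    }
}
```

Skipping the whole per-module walk up front, as the diff above does when every builtin lint is in the "doesn't need to run" set, avoids that per-call overhead entirely.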
We run ~70 late lints for all dependencies even if they use `--cap-lints=allow`, which seems wasteful. It looks like these lints are super fast (unlike early lints), but still.

r? @ghost