Do not run per-module late lints if they can be all skipped #139597


Open · wants to merge 2 commits into base: master

Conversation

Kobzol (contributor) commented Apr 9, 2025

We run ~70 late lints for all dependencies even when they are built with `--cap-lints=allow`, which seems wasteful. It looks like these lints are super fast (unlike early lints), but still.
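The change boils down to a single upfront check on the per-module late lint path, sketched below (identifiers as in the review excerpt later in this thread; the surrounding function is abbreviated):

    // Minimal sketch of the new check, not the full diff.
    let dont_need_to_run = tcx.lints_that_dont_need_to_run(());
    let can_skip_lints =
        builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
    if !can_skip_lints {
        late_lint_mod_inner(tcx, module_def_id, context, builtin_lints);
    }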

r? @ghost

rustbot added the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties) and T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue) labels on Apr 9, 2025.

Kobzol commented Apr 9, 2025

@bors try @rust-timer queue

This won't actually show any improvements in rustc-perf; I want to get a compiler build and see if there are any regressions from the new check.

rustbot added the S-waiting-on-perf (Status: Waiting on a perf run to be completed) label on Apr 9, 2025.
bors added a commit to rust-lang-ci/rust that referenced this pull request Apr 9, 2025
Do not run per-module lints if they can be all skipped


bors commented Apr 9, 2025

⌛ Trying commit 3c4fb44 with merge 0d5b1b6...


Kobzol commented Apr 9, 2025

Actually, locally it looks like the early lints are much more expensive, even though there are fewer of them. Skipping the late lints saved ~3% of build time locally while building dependencies (modulo parallelism, though).


Kobzol commented Apr 9, 2025

Calling tcx.lints_that_dont_need_to_run in check_ast_node seems to introduce a hang, with the CPU at 100% :/


bors commented Apr 9, 2025

☀️ Try build successful - checks-actions
Build commit: 0d5b1b6 (0d5b1b628a87bacb4d18bdd2a01048efdf15d319)

@rust-timer commented:

Finished benchmarking commit (0d5b1b6): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    0.3%    [0.2%, 0.4%]      2
Improvements ✅ (primary)     -0.4%   [-0.6%, -0.2%]    6
Improvements ✅ (secondary)   -0.8%   [-1.6%, -0.2%]    8
All ❌✅ (primary)             -0.4%   [-0.6%, -0.2%]    6

Max RSS (memory usage)

Results (primary -2.5%, secondary 1.4%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    1.4%    [1.4%, 1.4%]      1
Improvements ✅ (primary)     -2.5%   [-2.5%, -2.5%]    1
Improvements ✅ (secondary)   -       -                 0
All ❌✅ (primary)             -2.5%   [-2.5%, -2.5%]    1

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 780.203s -> 781.223s (0.13%)
Artifact size: 366.14 MiB -> 365.99 MiB (-0.04%)

rustbot added the perf-regression (Performance regression) label and removed the S-waiting-on-perf (Status: Waiting on a perf run to be completed) label on Apr 10, 2025.
Kobzol marked this pull request as ready for review on April 10, 2025, 21:02.
Kobzol changed the title from "Do not run per-module lints if they can be all skipped" to "Do not run per-module late lints if they can be all skipped" on Apr 10, 2025.

Kobzol commented Apr 10, 2025

@rustbot ready

r? compiler

        // at all. This happens often for dependencies built with `--cap-lints=allow`.
        let dont_need_to_run = tcx.lints_that_dont_need_to_run(());
        let can_skip_lints =
            builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
A reviewer (Member) commented:

Suggested change:
-    builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
+    builtin_lints.get_lints().iter().all(|lint| dont_need_to_run.contains(&LintId::of(lint)));

I misread `l` as `1` and was like, wow, it's kinda weird to hardcode `LintId::of(1)` lmao

            builtin_lints.get_lints().iter().all(|l| dont_need_to_run.contains(&LintId::of(l)));
        if !can_skip_lints {
            late_lint_mod_inner(tcx, module_def_id, context, builtin_lints);
        }
    } else {
        let builtin_lints = Box::new(builtin_lints) as Box<dyn LateLintPass<'tcx>>;
        let mut binding = store
The reviewer (Member) commented:

why not do the same thing here that we do in late_lint_crate and filter out passes that only contain stuff that doesn't need to be run?

Kobzol (author) replied:

We could do that, but I think that would be slower. When we run everything, it is run through the compile-time-prepared structure, which merges all lints together. If we filtered individual lints, they would need to be run through a runtime-prepared structure. For the Clippy lints that makes sense, but here I essentially only want to distinguish between "everything is disabled due to `--cap-lints=allow`" and "something might be disabled, but let's still run in the merged mode".
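A minimal self-contained sketch of that trade-off; the names (Pass, LintA, LintB, Combined) are hypothetical stand-ins for the compiler's real types, not compiler code:

    trait Pass {
        fn check_mod(&mut self);
    }

    struct LintA;
    impl Pass for LintA {
        fn check_mod(&mut self) { /* lint body */ }
    }

    struct LintB;
    impl Pass for LintB {
        fn check_mod(&mut self) { /* lint body */ }
    }

    // Compile-time combined pass: statically dispatches to every lint,
    // so individual lints cannot be filtered out from the outside.
    struct Combined(LintA, LintB);
    impl Pass for Combined {
        fn check_mod(&mut self) {
            self.0.check_mod();
            self.1.check_mod();
        }
    }

    // Runtime-prepared passes: a list of boxed passes that can be
    // filtered per pass, at the cost of dynamic dispatch.
    fn runtime_passes() -> Vec<Box<dyn Pass>> {
        vec![Box::new(LintA), Box::new(LintB)]
    }

    fn main() {
        let mut combined = Combined(LintA, LintB);
        combined.check_mod();
        let mut passes = runtime_passes();
        for pass in passes.iter_mut() {
            pass.check_mod();
        }
    }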

The reviewer (Member) replied:

I meant in the else branch (but GitHub doesn't let me comment on the right line ☹️): there we make a RuntimeCombinedLateLintPass but don't do any filtering of dyn LateLintPasses that contain only allowed lints, e.g.

        let mut binding = store
            .late_module_passes
            .iter()
            .map(|mk_pass| (mk_pass)(tcx))
            .chain(std::iter::once(builtin_lints))
            .filter(|pass| { ... }) // new, filter out passes that only contain allowed lints
            .collect::<Vec<_>>();

Kobzol (author) replied:

But builtin_lints is a single LintPass that combines all the builtin late lints together. In other words, its check_mod implementation (for example) calls:

    LINT_1.check_mod();
    LINT_2.check_mod();

and so on. So we cannot filter these lints from the outside. In theory, we could filter them inside the lint visitor functions, but that would mean doing the check for every call of each visitor function, times the number of lints we have, which... doesn't sound fast 😅
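A rough, self-contained sketch of that cost difference; NUM_LINTS, the set type, and the function names here are illustrative, not compiler code:

    use std::collections::HashSet;

    const NUM_LINTS: usize = 70; // roughly the number of builtin late lints

    // Filtering inside the visitor: the membership test runs once per
    // (visitor call, lint) pair.
    fn check_mod_filtered(skipped: &HashSet<usize>) {
        for lint_id in 0..NUM_LINTS {
            if skipped.contains(&lint_id) {
                continue;
            }
            // ... run this lint's check_mod body ...
        }
    }

    // The approach in this PR: one upfront test per module; if every
    // builtin lint is skippable, the combined pass never runs at all.
    fn check_mod_upfront(skipped: &HashSet<usize>) {
        if (0..NUM_LINTS).all(|lint_id| skipped.contains(&lint_id)) {
            return;
        }
        // ... run the combined pass with all lints merged ...
    }

    fn main() {
        let skipped: HashSet<usize> = (0..NUM_LINTS).collect();
        check_mod_filtered(&skipped);
        check_mod_upfront(&skipped);
    }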

BoxyUwU added the S-waiting-on-author (Status: This is awaiting some action, such as code changes or more information, from the author) label and removed the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties) label on Apr 14, 2025.
Labels
- perf-regression: Performance regression.
- S-waiting-on-author: Status: This is awaiting some action (such as code changes or more information) from the author.
- T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.

5 participants