Typechecking pass takes an extreme amount of time on trivial programs #18208
Comments
This might help, too. The number in front of each line is how often that line appeared in the output.
This seems to be a regression caused by #17197 interacting badly with the Sized builtin bound. Specifically, I think that this change causes O(n^2) behavior here: https://github.com/rust-lang/rust/pull/17197/files#diff-31c6a728eb925b9f6b0124e93948d270L2372

Given this:

fn some() -> int {
    0
}

fn main() {
    let _a = some();
    let _a = some();
    let _a = some();
}

Ignoring the final …

Unfortunately I lack the knowledge to properly fix this.
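To make the O(n^2) claim concrete, here is a small self-contained sketch (simplified, assumed names such as Obligation and FulfillCx; this is not the actual trait-selection code) of the pattern being described: every newly registered obligation triggers a re-walk of the entire pending list, so n registrations cost on the order of n^2 checks.

// Simplified model only: each `let` binding is treated as registering one
// obligation, and registration re-walks the whole pending list.
#[allow(dead_code)]
struct Obligation {
    description: String,
}

struct FulfillCx {
    pending: Vec<Obligation>,
    checks_performed: u64,
}

impl FulfillCx {
    fn new() -> Self {
        FulfillCx { pending: Vec::new(), checks_performed: 0 }
    }

    // The problematic pattern: every registration triggers a pass over the
    // whole pending list, so n registrations cost 1 + 2 + ... + n checks.
    fn register_and_recheck_all(&mut self, ob: Obligation) {
        self.pending.push(ob);
        for _ob in &self.pending {
            self.checks_performed += 1; // stand-in for one selection attempt
        }
    }
}

fn main() {
    let mut cx = FulfillCx::new();
    let n: u64 = 4096;
    for i in 0..n {
        cx.register_and_recheck_all(Obligation {
            description: format!("binding _{} : Sized", i),
        });
    }
    // For n = 4096 this performs n * (n + 1) / 2 = 8_390_656 checks.
    println!("registrations: {}, checks: {}", n, cx.checks_performed);
}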
Over the past few days we've seen a 1hr regression in cycle time for the …

Nominating because this looks bad and to make sure it's not a limitation of the type checking algo.

I'll take a look.

(Incidentally, I suspect the issues with #18121 are orthogonal to the N^2 issue. I was concerned about the latter but it didn't seem to be causing trouble in practice -- it should be easy enough, though, to hold off on "reattempting" to resolve pending obligations. And it may also be worth special-casing the Sized bound checks, since they in particular are so frequent.)
Avoid O(n^2) performance by reconsidering the full set of obligations only when we are about to report an error (#18208). I found it is still important to consider the full set in order to make tests like `let x: Vec<_> = obligations.iter().collect()` work. I think we lack the infrastructure to write a regression test for this, but when I did manual testing I found a massive reduction in type-checking time for extreme examples like those found in #18208 vs stage0. r? @dotdash
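As a rough illustration of the strategy this fix describes (again with simplified, assumed names rather than rustc's real fulfillment code), the sketch below processes only the obligations added since the last pass during normal operation, and re-walks the full set only when an error is about to be reported, confining the quadratic cost to the diagnostic path.

// Simplified model only: normal passes look at new obligations, the error
// path re-walks everything.
struct Obligation {
    description: String,
}

struct FulfillCx {
    pending: Vec<Obligation>,
    // Index of the first obligation not yet seen by a normal pass.
    first_unprocessed: usize,
}

impl FulfillCx {
    fn new() -> Self {
        FulfillCx { pending: Vec::new(), first_unprocessed: 0 }
    }

    fn register(&mut self, ob: Obligation) {
        self.pending.push(ob);
    }

    // Normal path: cost proportional to the number of *new* obligations.
    fn select_new_obligations(&mut self) {
        for ob in &self.pending[self.first_unprocessed..] {
            println!("selecting: {}", ob.description);
        }
        self.first_unprocessed = self.pending.len();
    }

    // Error path: only here is the full set reconsidered.
    fn select_all_before_reporting_errors(&mut self) {
        for ob in &self.pending {
            println!("re-selecting: {}", ob.description);
        }
        self.first_unprocessed = self.pending.len();
    }
}

fn main() {
    let mut cx = FulfillCx::new();
    for i in 0..3 {
        cx.register(Obligation { description: format!("_{} : Sized", i) });
        cx.select_new_obligations();
    }
    cx.select_all_before_reporting_errors();
}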
…arameters too when creating xform-self-type. Fixes rust-lang#18208.
this specific issue has been addressed. @nikomatsakis may open up another issue to track related work on improving compile-time for e.g. method resolution, but that need not block closing this issue. Therefore, closing.
…2, r=nrc This is a pretty major refactoring of the method dispatch infrastructure. It is intended to avoid gross inefficiencies and enable caching and other optimizations (e.g. #17995), though it itself doesn't seem to execute particularly faster yet. It also solves some cases where we were failing to resolve methods that we theoretically should have succeeded with. Fixes #18674. cc #18208
stalled rather than keeping this annoying mark; I checked that the original compile-time regression that the mark was intended to fix (rust-lang#18208) was still reasonable, but I've not done exhaustive measurements to see how important this "optimization" really is anymore
Take this simple program and modify it by adding more consecutive

println!("Hello world");

lines. Compile it with -Z time-passes and observe how long typechecking takes. I've compiled a table below.
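The example program itself is only linked above, so here is a minimal reconstruction (assumed, not the original file) of the kind of program being described, together with the command used to time the passes.

// Reconstruction of the trivial test case: a main function made of many
// consecutive println! lines. Compile with, e.g.:
//
//     rustc -Z time-passes hello.rs
//
// and watch the "type checking" entry grow out of proportion as more
// lines are added.
fn main() {
    println!("Hello world");
    println!("Hello world");
    println!("Hello world");
    println!("Hello world");
    // ...repeat up to 4096 times to reproduce the reported slowdown.
}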
Here's the full output of time-passes for the 4096 case, to see the enormous disparity between typechecking and the other passes (though liveness is another offender):

This seems to contradict our usual excuse for compiler slowness of "it's not our fault, it's LLVM".