Optimize dropck #64595
Conversation
r? @eddyb (rust_highfive has picked a reviewer for you, use r? to override)
I spent some more time optimizing this, and I have another commit locally (I'll wait on perf for the first commit before pushing it) that further optimizes drop check in a pathological case to the point where drop check becomes much faster (i.e., it completes, reaching overflow). However, that doesn't fully solve the problem, as we then hit a hang when trying to print the types (somewhat unsurprisingly, these types are huge); I haven't quite figured out a good fix there. This leads me to conclude that @pnkfelix's remarks on Zulip when discussing this (cc #4287) were apt -- we should try to come up with a check that detects this case more knowingly and errors out before we try to recurse down the chain of instantiations, so we can provide better spans and such. I've not yet thought of a good way to do so, but am still thinking.
Finished benchmarking try commit c1c16e012812695bb3f7f84ad6c6ba53b1eeccbe, comparison URL.
Hm, so instruction counts and such look basically like a wash -- this is somewhat expected, as we've not changed the algorithm here for most types -- however, it does show marked improvements for the two test cases I examined (below). Both of these now complete dropck (into overflow) rather than being so slow in dropck that we don't reach overflow in reasonable time. Neither one actually completes in reasonable time due to this print, which takes forever on such large types, but this PR still seems like a good thing to land (in particular, because it's not really hurting readability all that much IMO). So I'm going to r? @pnkfelix here, but feel free to reassign (we're not changing the algorithm here at all, so really anyone can review).

enum Perfect<T> {
    Tip(T),
    Fork(Box<Perfect<(T, T)>>),
}

fn main() {
    let _ = Perfect::Tip(Box::new(42));
}

enum Perfect<T> {
    Tip(T),
    Fork(Box<Perfect<(T, T)>>),
}

fn main() {
    let _ = Perfect::Tip(42);
}
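To make the pathology concrete (purely an illustration, not code from the PR): each `Fork` level instantiates `Perfect` at a pair of the previous type parameter, so the chain of types dropck has to walk doubles in size at every step and can only terminate by hitting the overflow limit. A few steps of that chain for `Perfect<i32>`:

// Prints the first few type parameters in the instantiation chain that dropck
// must consider for `Perfect<i32>`; each step doubles the size of the type.
fn main() {
    println!("{}", std::any::type_name::<i32>());
    println!("{}", std::any::type_name::<(i32, i32)>());
    println!("{}", std::any::type_name::<((i32, i32), (i32, i32))>());
    println!("{}", std::any::type_name::<(((i32, i32), (i32, i32)), ((i32, i32), (i32, i32)))>());
}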
result.kinds.extend(constraints.outlives.drain(..));
result.overflows.extend(constraints.overflows.drain(..));

// At some point we just need to stop
I haven't fully brought all this back into my mental cache; can you say more here on why >= 1 is the right threshold?
If all you want to do is stop as soon as there is any overflow (which is how I interpret this), then fine. But the comment as written makes it sound like the threshold is a more interesting value, like 100 or something.
(Anyway apart from that, I think these changes look fine.)
This conditional is basically because with the new code we detect thousands (and then millions, and so on) of overflows, eventually leading to an OOM on the Perfect enum; I initially thought maybe we should have some small number, like 10, here -- but then, thinking about it more, I decided that chances are the first overflow contains all the information you need, and there's no reason for us to spin further.
I can update this comment with that summary?
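For readers skimming the thread, the check under discussion has roughly the following shape (a simplified, self-contained sketch with placeholder types; the real code lives in rustc's dropck implementation and differs in detail):

// Placeholder types standing in for rustc's internal representations.
#[derive(Default)]
struct DropckOutlivesResult {
    kinds: Vec<u32>,
    overflows: Vec<u32>,
}

struct Constraints {
    outlives: Vec<u32>,
    overflows: Vec<u32>,
}

// Merge one step's constraints into the accumulated result and report whether
// the caller should stop recursing.
fn absorb(result: &mut DropckOutlivesResult, constraints: &mut Constraints) -> bool {
    result.kinds.extend(constraints.outlives.drain(..));
    result.overflows.extend(constraints.overflows.drain(..));

    // Stop as soon as any overflow has been recorded: the first overflow
    // already carries everything needed to report the error, and continuing
    // only piles up thousands (then millions) of redundant overflows.
    !result.overflows.is_empty()
}

fn main() {
    let mut result = DropckOutlivesResult::default();
    let mut step = Constraints { outlives: vec![1], overflows: vec![2] };
    assert!(absorb(&mut result, &mut step)); // an overflow was seen: stop here
}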
Force-pushed from 5acbeac to 21bdfb8.
Alright, I've updated the comment with some explanation for why we're comparing to 1 and not something larger -- I'm not sure if @pnkfelix meant #64595 (comment) as an r=me, so I'll not approve this myself.
I'm a bot! I can only do what humans tell me to, so if this was not helpful or you have suggestions for improvements, please ping or otherwise contact
Force-pushed from 21bdfb8 to e2c80c5.
Looks like CI broke due to changes on master, rebased -- should be fixed now.
☔ The latest upstream changes (presumably #64864) made this pull request unmergeable. Please resolve the merge conflicts.
Ping from triage.
Pinging again from triage.
This allows caching some recursive types and getting to an error much more quickly.
Previously we'd frequently throw away vectors, which is bad for performance.
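To illustrate the second commit message above with a generic sketch (not the PR's code; plain integers stand in for dropck constraints): instead of building a fresh Vec at every recursion step and then merging it into the caller's Vec, constraints are drained into a single accumulator that lives for the whole computation.

// Old shape: every recursive call allocates and returns its own Vec, which the
// caller copies from and then discards.
fn collect_allocating(depth: u32) -> Vec<u32> {
    if depth == 0 {
        return vec![depth];
    }
    let mut out = vec![depth];
    out.extend(collect_allocating(depth - 1)); // temporary Vec thrown away here
    out
}

// New shape: a single accumulator is threaded through the recursion, so no
// intermediate Vecs are created and immediately dropped.
fn collect_into(depth: u32, acc: &mut Vec<u32>) {
    acc.push(depth);
    if depth > 0 {
        collect_into(depth - 1, acc);
    }
}

fn main() {
    let mut acc = Vec::new();
    collect_into(3, &mut acc);
    assert_eq!(acc, collect_allocating(3));
}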
Force-pushed from e2c80c5 to 8de7fd8.
@bors r+
📌 Commit 8de7fd8 has been approved by
Optimize dropck

This does two things: caches the `trivial_dropck` check by making it a query, and shifts around the implementation of the primary dropck itself to avoid allocating many small vectors.
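As a rough mental model of the first change, here is a hedged sketch of a memoized "trivial dropck" check (the enum, names, and memo map are illustrative stand-ins; the real check operates on rustc's interned types and gets its caching from the compiler's query system rather than a hash map):

use std::collections::HashMap;

// Stand-in for rustc's interned types.
#[derive(Clone, PartialEq, Eq, Hash)]
enum Ty {
    Int,
    Ref(Box<Ty>),
    Tuple(Vec<Ty>),
    Adt(String, Vec<Ty>), // user-defined type, possibly with a Drop impl
}

// "Trivial dropck": types whose drop can't impose any outlives constraints,
// so the full dropck_outlives computation can be skipped. Making this a query
// means the answer is computed once per type and then looked up, which the
// memo map models here.
fn trivial_dropck(ty: &Ty, cache: &mut HashMap<Ty, bool>) -> bool {
    if let Some(&cached) = cache.get(ty) {
        return cached;
    }
    let result = match ty {
        Ty::Int | Ty::Ref(_) => true,
        Ty::Tuple(elems) => elems.iter().all(|t| trivial_dropck(t, cache)),
        // Conservatively assume user-defined types may have interesting drops.
        Ty::Adt(..) => false,
    };
    cache.insert(ty.clone(), result);
    result
}

fn main() {
    let mut cache = HashMap::new();
    let ty = Ty::Tuple(vec![Ty::Int, Ty::Ref(Box::new(Ty::Int))]);
    assert!(trivial_dropck(&ty, &mut cache));
}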
☀️ Test successful - checks-azure