integrate anon dep nodes into trait selection #43184
Merged: bors merged 1 commit into rust-lang:master from nikomatsakis:incr-comp-anonymize-trait-selection on Jul 15, 2017
Conversation
aidanhs added the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties) on Jul 13, 2017
@bors r+ Excellent! This should bring us a lot closer to red-green evaluation.
📌 Commit 4f030d0 has been approved
bors added a commit that referenced this pull request on Jul 15, 2017
…on, r=michaelwoerister

integrate anon dep nodes into trait selection

Use anonymous nodes for trait selection. In all cases, we use the same basic "memoization" strategy:

- Store the `DepNodeIndex` in the slot along with the value.
- If the value is present, return it, and add a read of the dep-node-index.
- Else, start an anonymous task, and store the resulting node.

We apply this strategy to a number of caches in trait selection:

- The "trans" caches of selection and projection
- The "evaluation" cache
- The "candidate selection" cache

In general, for our cache strategy to be "dep-correct", the computation of the value is permitted to rely on the *value in the key* but nothing else. The basic argument is this: in order to look something up, you have to produce the key, and to do that you must have whatever reads were needed to create the key. Then, you get whatever reads were further needed to produce the value. But if the "closure" that produced the value made use of *other* environmental data, not derivable from the key, that would be bad -- but that would **also** suggest that the cache is messed up (though it's not proof).

The structure of these caches does not obviously prove that the correctness criteria are met, and I aim to address that in further refactorings. But I *believe* it to be the case that, if we assume that the existing caches are correct, there are also no dependency failures (in other words, if there's a bug, it's a pre-existing one). Specifically:

- The trans caches: these take as input just a `tcx`, which is "by definition" not leaky, the `trait-ref` etc., which is part of the key, and sometimes a span (doesn't influence the result). So they seem fine.
- The evaluation cache:
  - This computation takes as input the "stack" and has access to the infcx.
  - The infcx is a problem -- would be better to take the tcx -- and this is exactly one of the things I plan to improve in later PRs. Let's ignore it for now. =)
  - The stack itself is also not great, in that the *key* only consists of the top entry in the stack.
  - However, the stack can cause a problem in two ways:
    - overflow (we panic)
    - cycle check fails (we do not update the cache, I believe)
  - Moreover, the stack would generally just introduce ambiguities and errors anyhow, so that lessens the risk.
- The candidate selection cache:
  - As before, it takes the "stack" and has access to the infcx.
  - Here it is not as obvious that we avoid caching stack-dependent computations. However, to the extent that we do, this is a pre-existing bug, in that we are making cache entries we shouldn't.
  - I aim to resolve this by -- following the chalk-style of evaluation -- merging candidate selection and evaluation.

Anyway, the existing approach to handle dependencies in the trait code carries the same risks or worse, so this seems like a strict improvement!

r? @michaelwoerister
cc @arielb1
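The three-step memoization strategy described above can be sketched in isolation. This is a minimal, self-contained illustration, not rustc's actual code: `DepGraph`, `with_anon_task`, and `read_index` here are simplified stand-ins for the compiler's dep-graph machinery, with the graph reduced to an index counter and a log of recorded reads.

```rust
use std::collections::HashMap;

// Simplified stand-ins for rustc's dep-graph types (assumption: the real
// DepGraph tracks far more; here we only model indices and recorded reads).
type DepNodeIndex = u32;

struct DepGraph {
    next_index: DepNodeIndex,
    reads: Vec<DepNodeIndex>,
}

impl DepGraph {
    fn new() -> Self {
        DepGraph { next_index: 0, reads: Vec::new() }
    }

    // Cache-hit path: record a read of an existing dep node.
    fn read_index(&mut self, index: DepNodeIndex) {
        self.reads.push(index);
    }

    // Cache-miss path: run `op` inside a fresh anonymous task and return
    // its result together with the new node's index.
    fn with_anon_task<T>(&mut self, op: impl FnOnce() -> T) -> (T, DepNodeIndex) {
        let index = self.next_index;
        self.next_index += 1;
        (op(), index)
    }
}

// The cache slot stores the DepNodeIndex along with the value.
fn memoize<K, V>(
    graph: &mut DepGraph,
    cache: &mut HashMap<K, (V, DepNodeIndex)>,
    key: K,
    compute: impl FnOnce() -> V,
) -> V
where
    K: std::hash::Hash + Eq,
    V: Clone,
{
    if let Some((value, index)) = cache.get(&key) {
        // Value present: return it, and add a read of the dep-node-index.
        graph.read_index(*index);
        return value.clone();
    }
    // Else: start an anonymous task, and store the resulting node.
    let (value, index) = graph.with_anon_task(compute);
    cache.insert(key, (value.clone(), index));
    value
}

fn main() {
    let mut graph = DepGraph::new();
    let mut cache = HashMap::new();
    let first = memoize(&mut graph, &mut cache, "key", || 40 + 2);
    // The second lookup must not recompute; it replays a read instead.
    let second = memoize(&mut graph, &mut cache, "key", || unreachable!());
    assert_eq!(first, 42);
    assert_eq!(second, 42);
    assert_eq!(graph.reads.len(), 1);
    println!("value = {}, reads = {}", second, graph.reads.len());
}
```

Note how this makes the "dep-correct" criterion concrete: the `compute` closure is the only source of the value, so any dependency not derivable from the key would be invisible to the recorded anonymous node.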
☀️ Test successful - status-appveyor, status-travis
This decreased the memory usage of two benchmarks by 8% and improved compile time of one benchmark by up to 40%. Nice! 🎉