[ty] Better handling of "derived information" in constraint sets #21463
Conversation
Diagnostic diff on typing conformance tests: No changes detected when running ty on typing conformance tests ✅
sharkdp left a comment:
Not a full review yet, but I might not have enough time to review the rest today, so sending two initial questions.
> /// pruned from the search), and new constraints that we can assume to be true even if we haven't
> /// seen them directly.
> ///
> /// We support several kinds of sequent:
Just curious about the terminology here. The special thing about the equivalent term in logic seems to be that there can be multiple "consequents" on the right hand side. But none of the cases listed below seem to have that form?
The representation does allow for multiple consequents (there is a `Vec` in the `single_implications` and `pair_implications` maps). But we never construct one directly, since each sequent that we construct corresponds to an implication where we infer one fact from something else we already know. When that single-consequent sequent is folded into the rest of the `SequentMap`, though, it might get combined with other sequents that have the same lhs, producing a combined sequent with multiple consequents. (And the consequents of a sequent are ORed, so if there are multiple derived facts that we can infer from the same set of given facts, we can choose which of them we need to use.)
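For intuition, here is a minimal sketch of that folding behavior. The names (`SequentMap`, `single_implications`, `fold_in`) are stand-ins loosely based on the description above, not ty's actual types:

```rust
use std::collections::HashMap;

type Constraint = &'static str;

#[derive(Default)]
struct SequentMap {
    // lhs → ORed consequents: if the lhs holds, we may assume any one of
    // the consequents, choosing whichever is useful.
    single_implications: HashMap<Constraint, Vec<Constraint>>,
}

impl SequentMap {
    fn fold_in(&mut self, lhs: Constraint, consequent: Constraint) {
        // Folding in a sequent whose lhs we've already seen merges its
        // consequent into the existing entry instead of adding a new one.
        self.single_implications.entry(lhs).or_default().push(consequent);
    }
}

fn main() {
    let mut map = SequentMap::default();
    // Two single-consequent sequents with the same lhs...
    map.fold_in("U ≤ int", "T ≤ int");
    map.fold_in("U ≤ int", "T ≤ str");
    // ...combine into one sequent with two ORed consequents.
    assert_eq!(map.single_implications["U ≤ int"], vec!["T ≤ int", "T ≤ str"]);
}
```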
CodSpeed Performance Report: Merging #21463 will not alter performance.
sharkdp left a comment:
Fantastic in-code documentation and writeup. I have the feeling that I understood roughly what is going on and it seems to make a lot of sense to me, but I'll also admit that my understanding is most certainly not deep enough to provide any meaningful input on the overall design. But all of the individual pieces seem consistent to me.
We were previously normalizing the upper and lower bounds of each constraint when constructing constraint sets. Like in #21463, this was for conflated reasons: it made constraint set displays nicer, since we wouldn't render multiple constraints with obviously equivalent bounds (think `T ≤ A & B` and `T ≤ B & A`). But it was also useful for correctness, since prior to #21463 we were (trying to) add the full transitive closure to a constraint set's BDD, and normalization gave a useful reduction in the number of nodes in a typical BDD.

Now that we don't store the transitive closure explicitly, that second reason is no longer relevant. Our sequent map can store that full transitive closure much more efficiently than the expanded BDD would have. This helps fix some false positives on #20933, where we're seeing some (incorrect, need to be fixed, but ideally not blocking this effort) assignability failures between a type and its normalization.

Normalization is still useful for display purposes, and so we do normalize the upper/lower bounds before building up our display representation of a constraint set BDD.

Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
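As a toy illustration of that display-time normalization: sorting intersection elements into a canonical order makes `A & B` and `B & A` compare (and thus render) identically. The `Type` enum and `normalized` method here are hypothetical, not ty's representation:

```rust
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Type {
    Name(String),
    Intersection(Vec<Type>),
}

impl Type {
    // Normalize for display: recursively sort intersection elements into a
    // canonical order so that element order no longer distinguishes bounds.
    fn normalized(&self) -> Type {
        match self {
            Type::Name(n) => Type::Name(n.clone()),
            Type::Intersection(elems) => {
                let mut elems: Vec<Type> = elems.iter().map(Type::normalized).collect();
                elems.sort();
                Type::Intersection(elems)
            }
        }
    }
}

fn main() {
    let a_and_b = Type::Intersection(vec![Type::Name("A".into()), Type::Name("B".into())]);
    let b_and_a = Type::Intersection(vec![Type::Name("B".into()), Type::Name("A".into())]);
    // Different construction order, identical after normalization, so the
    // display layer can render `T ≤ A & B` once instead of twice.
    assert_eq!(a_and_b.normalized(), b_and_a.normalized());
}
```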
This saga began with a regression in how we handle constraint sets where a typevar is constrained by another typevar, which #21068 first added support for:
While working on #21414, I saw a regression in this test, which was strange, since that PR has nothing to do with this logic! The issue is that something in that PR made us instantiate the typevars `T` and `U` in a different order, giving them differently ordered salsa IDs. And importantly, we use these salsa IDs to define the variable ordering that is used in our constraint set BDDs. This showed that our "mutually constrained" logic only worked for one of the two possible orderings. (We can, and now do, test this in a brute-force way by copy/pasting the test with both typevar orderings.)

The underlying bug was in our `ConstraintSet::simplify_and_domain` method. It would correctly detect `(U ≤ T ≤ U) ∧ (U ≤ int)`, because those two constraints affect different typevars, and from that, infer `T ≤ int`. But it wouldn't detect the equivalent pattern in `(T ≤ U ≤ T) ∧ (U ≤ int)`, since those constraints affect the same typevar. At first I tried adding that as yet more pattern-match logic in the ever-growing `simplify_and_domain` method. But doing so caused other tests to start failing.

At that point, I realized that `simplify_and_domain` had gotten to the point where it was trying to do too much, and for conflicting consumers. It was first written as part of our display logic, where the goal is to remove redundant information from a BDD to make its string rendering simpler. But we also started using it to add "derived facts" to a BDD. A derived fact is a constraint that doesn't appear in the BDD directly, but which we can still infer to be true. Our failing test relies on derived facts: being able to infer that `T ≤ int` even though that particular constraint doesn't appear in the original BDD. Before, `simplify_and_domain` would trace through all of the constraints in a BDD, figure out the full set of derived facts, and add those derived facts to the BDD structure. This is brittle, because those derived facts are not universally true! In our example, `T ≤ int` only holds along the BDD paths where both `T = U` and `U ≤ int` hold. Other paths will test the negations of those constraints, and on those, we shouldn't infer `T ≤ int`. In theory it's possible (and we were trying) to use BDD operators to express that dependency... but that runs afoul of how we were simultaneously trying to remove information to make our displays simpler.
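To make that path-dependence concrete, here is a minimal, hypothetical sketch (plain Rust, not ty's actual types or API): along a BDD path, each constraint is assigned true or false, and the derived fact `T ≤ int` is only a sound inference on the one path where both antecedents were assigned true.

```rust
// Transitivity: T = U and U ≤ int together imply T ≤ int, but only when
// both of those constraints actually hold on the current path.
fn can_infer_t_le_int(t_eq_u: bool, u_le_int: bool) -> bool {
    t_eq_u && u_le_int
}

fn main() {
    // Enumerate the four BDD paths over the two constraints; only one of
    // them licenses the derived fact.
    for t_eq_u in [false, true] {
        for u_le_int in [false, true] {
            println!(
                "T = U: {t_eq_u:5}  U ≤ int: {u_le_int:5}  ⇒ may infer T ≤ int: {}",
                can_infer_t_le_int(t_eq_u, u_le_int)
            );
        }
    }
}
```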
So, I ripped off the band-aid. `simplify_and_domain` is now only used for display purposes. I have not touched it at all, except to remove some logic that is definitely not used by our `Display` impl. Otherwise, I did not want to touch that house of cards for now, since the display logic is not load-bearing for any type inference logic.

For all non-display callers, we have a new sequent map data type, which tracks exactly the same derived information. But it does so (a) without trying to remove anything from the BDD, and (b) lazily, without updating the BDD structure.
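As a sketch of what "lazily" means here, complementing the folding sketch earlier: derived facts are looked up on demand against the facts known on the current path, and the BDD itself is never rewritten. The names (`SequentMap`, `pair_implications`, `derives`) are again illustrative, not the real implementation:

```rust
use std::collections::{HashMap, HashSet};

type Constraint = &'static str;

struct SequentMap {
    // (antecedent, antecedent) → consequents we may assume if both hold.
    pair_implications: HashMap<(Constraint, Constraint), Vec<Constraint>>,
}

impl SequentMap {
    // Lazily check whether `goal` is derivable from the facts that are true
    // along the current BDD path, without mutating the BDD structure.
    fn derives(&self, path_facts: &HashSet<Constraint>, goal: Constraint) -> bool {
        self.pair_implications.iter().any(|((a, b), consequents)| {
            path_facts.contains(a) && path_facts.contains(b) && consequents.contains(&goal)
        })
    }
}

fn main() {
    let mut pair_implications = HashMap::new();
    pair_implications.insert(("T = U", "U ≤ int"), vec!["T ≤ int"]);
    let map = SequentMap { pair_implications };

    // On a path where both antecedents hold, the derived fact is available...
    let facts: HashSet<_> = ["T = U", "U ≤ int"].into_iter().collect();
    assert!(map.derives(&facts, "T ≤ int"));

    // ...but on a path where `U ≤ int` was negated, it is not.
    let facts: HashSet<_> = ["T = U"].into_iter().collect();
    assert!(!map.derives(&facts, "T ≤ int"));
}
```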
So the end result is that all of the tests (including the new regression tests) pass, via a more efficient (and hopefully better structured and documented) implementation, at the cost of hanging onto a pile of display-related tech debt that we'll want to clean up at some point.