Refactor diverging and numeric fallback. #46714
Conversation
(rust_highfive has picked a reviewer for you, use r? to override)
Hmm. I have some concerns about this approach. Let me start with the opening sentence:
The current place where we apply fallback is not really arbitrary -- it's intended to be essentially the last possible point. In other words, we want to give the maximal chance for the user's program to add constraints. The long term vision for how type-check should work is that we want to make it less imperative. Let me give an example. Right now, if you do

    let value = None;
    loop {
        if value.is_some() {
            // the type of `value.unwrap()` is not yet known
            value.unwrap().bar();
        } else {
            // this would tell us that the type of `value.unwrap()` is `char`,
            // but we never get this far
            value = Some('a');
        }
    }

Currently, this code will error because, at the time of type-checking the call to `bar()`, the type of `value` is still an unresolved inference variable. What I would like to eventually do in this sort of scenario is not to report an error, but rather to file a pending obligation. Basically we could defer type-checking the method call until the type of `value` becomes known.

In contrast, this PR would make it so that fallback is applied as a side effect of checking particular expressions. Now, it's true that we already do a certain amount of that -- particularly around coercions, but also as a side effect of trait matching, and I think we can find ways to go forward with the "defer" notion and still incorporate those side effects, but I am wary of introducing more such cases to accommodate.
@leodasvacas I was thinking more about this and #46206 (and I do apologize I haven't given you any feedback there yet) -- would you be interested in maybe scheduling a time to chat about this? (e.g., via some video chat or just IRC) I'd like to kind of brainstorm a bit and see if we can bottom out the design space a bit. I'm also curious if you're interested in helping with some related but different refactorings -- for example, making type inference less 'imperative' and prone to reporting errors, as I just described above.
Thank you for your quick and illuminating review. I believe you wrote your comment with user fallback in mind, and how running user fallback in the middle of inference would interact badly with the "defer" vision. The ideal solution is to get rid of fallback altogether, but that seems unlikely to happen.

However, getting back to this PR, I see it as just a refactoring. The moment where we currently run integer and diverging fallback is correct, but it's not the last possible correct moment. This PR seeks to run it at the last possible moment, for each variable that needs it. In the future, if we call fallback from a more principled place, this structure should make that easier.
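As a concrete illustration of the numeric fallback being discussed (my own sketch, not part of the PR): an integer or float variable that finishes inference with no other constraints falls back to `i32` or `f64` respectively. The `type_of` helper here is hypothetical, introduced only for the demonstration.

```rust
use std::any::type_name;

// Hypothetical helper: report the concrete type inference chose.
fn type_of<T>(_: &T) -> &'static str {
    type_name::<T>()
}

fn main() {
    let x = 5;   // no other constraints: integer fallback picks i32
    let y = 1.5; // no other constraints: float fallback picks f64
    assert_eq!(type_of(&x), "i32");
    assert_eq!(type_of(&y), "f64");
    println!("fallback types: {} {}", type_of(&x), type_of(&y));
}
```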
I don't see it quite this way. In particular, once code starts to compile (instead of erroring), we are (more) committed to a certain course of action. It may well be that, if we fixed things to defer obligations as I described above, we would want this code to behave differently than what fallback commits us to.
@nikomatsakis You're right. This PR currently changes behavior for code like:

    let x = 512;
    x as u8;

We must infer `x` to be `i32` here for backwards compatibility: the cast would otherwise hint `x` toward `u8`, and `512` does not fit in a `u8`. It would be prudent to crater, to see if there are other unintended effects of the current fallback that this accidentally "fixes".
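To make that cast case concrete (a sketch of the behavior described above; `cast_demo` is my own name for the example): because fallback makes `x` an `i32` before the cast is checked, `512 as u8` compiles and truncates rather than erroring.

```rust
fn cast_demo() -> u8 {
    let x = 512; // fallback infers x: i32 before the cast is checked
    x as u8      // i32 -> u8 cast truncates: 512 becomes 0; without early
                 // fallback the cast could instead hint x toward u8,
                 // where 512 doesn't fit
}

fn main() {
    assert_eq!(cast_demo(), 0);
    println!("{}", cast_demo());
}
```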
ping @nikomatsakis, this may be ready for another look?
I think @leodasvacas and I still need to schedule a chat. Sorry that's been difficult. I'm back from vacation now, but I'll be traveling a bit next week -- something like wed, thu, or fri could probably work. (@leodasvacas -- I forget, were we chatting before on gitter? e-mail?)
Marking as blocked pending a meeting with @nikomatsakis.
We've met and @nikomatsakis is taking another look, considering this is supposed to make no more and no less code compile, as it should be just a refactoring. This is now waiting on review.
This looks nice, I agree. I didn't have time to fully grok why some of the behavior changed, though. Maybe you can walk me through it a bit? Thanks :)
src/librustc_typeck/check/mod.rs
    _ if self.is_tainted_by_errors() => self.tcx().types.err,
    UnconstrainedInt => self.tcx.types.i32,
    UnconstrainedFloat => self.tcx.types.f64,
    Neither if self.type_var_diverges(ty) && fallback == Fallback::Full
Nit: I'd rather see something like this
Neither if self.type_var_diverges(ty) => match fallback {
Fallback::Full => ...
Fallback::Whatever => ...
}
Neither => return
The reason is that this way I know what the variants are for `fallback` without having to check, so it's easier for me to reason about whether this is doing the right thing.
Done
    try!(closure(|| bar(0 as *mut _))); //~ ERROR cannot find function `bar` in this scope
    try!(closure(|| bar(0 as *mut _)));
    //~^ ERROR cannot find function `bar` in this scope
    //~^^ ERROR cannot cast to a pointer of an unknown kind
So, clearly this isn't a pure refactoring -- at least the error messages changed. This new error doesn't appear particularly helpful -- I wonder if we should find a way to suppress it.
The error suppression regressed here, don't know what's going on. An `is_tainted_by_errors()` check before emitting cast errors should fix this, not sure if it's a good fix.
That seems like a reasonable thing to do, to me, or at least around cases involving "unknown" things like type variables.
    @@ -7,6 +7,7 @@
    // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
    // option. This file may not be copied, modified, or distributed
    // except according to those terms.
    // compile-flags: --error-format=human
this test was renamed, but what happened to the old `.stderr` file?
    = note: required by `main::assert_sync`

    error[E0277]: the trait bound `std::cell::Cell<{integer}>: std::marker::Sync` is not satisfied
      --> $DIR/not-send-sync.rs:26:5
can you walk me through why the error behavior changed in this example?
`i32` was replaced with `{integer}` because previously we did fallback before `analyze_closure()`; now we do fallback after `analyze_closure()`, so any errors in closures taint the context and prevent fallback. Don't know why the order of the two errors was swapped, or why there is a place where `_` was replaced with `((),)`.
I'm now checking for errors before emitting "unknown cast" errors, which fixes the test that regressed.
☔ The latest upstream changes (presumably #47528) made this pull request unmergeable. Please resolve the merge conflicts.
src/librustc_typeck/check/mod.rs
    fcx.select_obligations_where_possible();
    fcx.closure_analyze(body);
I am pondering whether this is the right change.
I mean, the original intention of applying fallback where we did was that -- after this point -- we were not supposed to be introducing "new constraints" that might have influenced the type of the variables. Therefore, it makes sense to do fallback (which may in turn introduce new constraints, leading to some iteration).
Do you think we are introducing new constraints such that fallback is occurring too early? (Do we have an example of that?)
In general, though, I am on board for trying to restructure typeck to be as "lazy" as possible, so I am trying to decide whether this change is in fact doing that. It's not the way I had originally thought to go about things, which would be more a matter of identifying places where we invoke `structurally_resolve_type` or similar helpers and changing them into deferrable obligations (which would then be ultimately resolved during `select_obligations_where_possible`).
(Though a pre-req for doing that work is probably reworking the trait system to improve efficiency, another effort I'd like to have started yesterday.)
More specifically, things like closure upvar inference (and casts) were meant to operate over the "fully inferred types", basically. That said, I think there are corner cases that make this something of a fiction. Have to bring that back into cache.
identifying places where we invoke `structurally_resolve_type` or similar helpers and changing them into deferrable obligations
I understand this goal; in my simplistic bovine mind the ideal type checker goes like this:

1. Introduce all lazy constraints.
2. Solve as much as possible.
3. Do fallback.
4. Solve as much as possible.

This PR tries to make it blatantly obvious in the code that no other constraints are added between steps 3 and 4.
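Those four steps can be sketched as a toy model (my own sketch in ordinary Rust; it resembles nothing of rustc's real implementation, and the `InferCtx` type and its methods are invented for illustration): collect constraints, solve, apply `i32` fallback to still-unconstrained variables, then solve again.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty {
    I32,
    Char,
}

// Toy inference context: each variable is either unresolved (None)
// or resolved to a concrete type.
struct InferCtx {
    vars: Vec<Option<Ty>>,
    // Pending constraints: (variable index, required type).
    constraints: Vec<(usize, Ty)>,
}

impl InferCtx {
    fn new() -> Self {
        InferCtx { vars: Vec::new(), constraints: Vec::new() }
    }

    // Step 1: introduce variables and lazy constraints.
    fn fresh_var(&mut self) -> usize {
        self.vars.push(None);
        self.vars.len() - 1
    }
    fn constrain(&mut self, var: usize, ty: Ty) {
        self.constraints.push((var, ty));
    }

    // Steps 2 and 4: solve as much as possible. A real solver would
    // unify and report conflicts; the toy just takes the first answer.
    fn solve(&mut self) {
        for (var, ty) in self.constraints.drain(..) {
            self.vars[var].get_or_insert(ty);
        }
    }

    // Step 3: numeric fallback for variables still unconstrained.
    fn fallback(&mut self) {
        for v in &mut self.vars {
            v.get_or_insert(Ty::I32);
        }
    }
}

fn main() {
    let mut cx = InferCtx::new();
    let a = cx.fresh_var(); // e.g. `let a = 'x';`
    let b = cx.fresh_var(); // e.g. `let b = 1;` -- no constraints at all
    cx.constrain(a, Ty::Char);

    cx.solve();    // a = Char, b still unknown
    cx.fallback(); // b falls back to I32
    cx.solve();    // nothing left to do

    assert_eq!(cx.vars[a], Some(Ty::Char));
    assert_eq!(cx.vars[b], Some(Ty::I32));
}
```

The PR's claim, in these terms, is that nothing sneaks in between `fallback()` and the final `solve()`.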
Therefore, it makes sense to do fallback (which may in turn introduce new constraints, leading to some iteration).
I guess we haven't found any cases where the "early" fallback was actually helping inference more than the "late" fallback does.
Do you think we are introducing new constraints such that fallback is occurring too early?
I gave the following example up-thread:
let x = 512;
x as u8;
Casting would hint `x` to `u8`, but fallback forces it to `i32`. This is the only case I found where early fallback was needed for backwards compatibility.
I don't know if it's true that we don't introduce any new constraints in `closure_analyze`, but it sounds like something difficult to reason about. With this PR we wouldn't have to reason about it anymore. The fact that this PR passes CI is empirical evidence that `closure_analyze` isn't really relying on fallback; whether that's really true is beyond my knowledge.
This PR tries to make it blatantly obvious in the code that no other constraints are added between steps 3 and 4.
OK, that makes sense to me.
I guess we haven't found any cases where the "early" fallback was actually helping inference more than the "late" fallback does.
Well, consider the upvar inference in particular. This code is tasked with figuring out whether a closure ought to be `FnOnce`, `FnMut`, or `Fn`. It does this by looking at what sorts of things the closure does when it executes. For example, given this closure:
|| foo(x)
if the type of `x` is `Vec<String>`, that's a move of `x`, and hence the closure must be `FnOnce`. But if the type is `u32`, then this is a copy, and the closure can be simply `Fn`.
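The move-vs-copy distinction can be seen directly (my own sketch; `takes_by_value`, `call_twice`, and `call_once` are names invented for the example): a closure that consumes a `Copy` capture stays callable repeatedly (`Fn`), while one that consumes a `Vec<String>` is only `FnOnce`.

```rust
fn takes_by_value<T>(_: T) {}

// Requires Fn: the closure must be callable more than once.
fn call_twice<F: Fn()>(f: F) {
    f();
    f();
}

// Requires only FnOnce: the closure may consume its captures.
fn call_once<F: FnOnce()>(f: F) {
    f();
}

fn closure_kinds() -> &'static str {
    let n: u32 = 7;
    // u32 is Copy, so `takes_by_value(n)` copies `n` out of the
    // capture: the closure implements Fn.
    call_twice(|| takes_by_value(n));

    let v: Vec<String> = vec!["hi".to_string()];
    // Vec<String> is not Copy, so the closure moves `v` and is
    // only FnOnce (passing it to call_twice would not compile).
    call_once(|| takes_by_value(v));

    "ok"
}

fn main() {
    assert_eq!(closure_kinds(), "ok");
}
```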
So what are we to do if the type is not yet inferred and we are later going to come and apply defaults? We can't know yet whether that is a move or a copy.
I think the only thing we could do would be to defer the decision about what traits the closure implements until after defaults have been applied.
Based on your explanation, it seems plausible that upvar inference would like to add constraints, consider:
fn fn_closure<F: Fn()>(f: F) {}
fn main() {
let x = None; // Something uninferred.
fn_closure(|| std::mem::drop(x));
}
It seems reasonable that upvar inference would add a `Copy` bound to `x`, or perhaps a Move bound (though I don't see how those bounds could actually help inference).
It's also possible that defaulting would prevent upvar inference from working: in `fn foo<T: FnOnce() = Fn()>(_: &T) {}`, after defaulting, `T` is stuck as `Fn()` when upvar inference could have inferred it to `FnOnce()`.
So perhaps upvar inference isn't a special case and should be a lazy constraint; while it isn't, it would use the usual mechanisms for eager type resolution such as `structurally_resolve_type`. Whether we want defaulting in eager type resolution is something to be decided.
Argh, sorry it took me a few days to respond. Just juggling a lot of reviews lately and this one takes deeper thought than most.
it seems plausible that upvar inference would like to add constraints, consider:
That is true, you could imagine it imposing the requirement, but it's not how it's defined today. There are actually two things involved here. First, we have a rule (which perhaps was not the wisest rule, but it's currently there) that based on the "expected type and where-clauses", we will select the closure kind -- so in this case, upvar inference actually doesn't get involved, we just pick the closure kind as `Fn` from the get-go. (If you're interested, I'd like to try an experiment where we back off from that rule and measure the impact.)
In that case, the error is detected during borrowck, which comes after typeck and therefore can assume all types are fully known. Note that in general the requirement for things to be `Copy` (or else be moved) is flow dependent and is ultimately enforced on MIR, and we require all types to be known in order to build MIR.
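That flow-dependence is easy to see (my own sketch; `branch_demo` is an invented name): whether `v` is moved depends on which branch runs, and borrowck accepts a use of `v` on the path where no move happens.

```rust
fn branch_demo(consume: bool) -> usize {
    let v = vec![1, 2, 3];
    if consume {
        drop(v); // `v` is moved on this path only
        0
    } else {
        v.len() // fine: no move occurred on this path
    }
}

fn main() {
    assert_eq!(branch_demo(true), 0);
    assert_eq!(branch_demo(false), 3);
}
```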
But if you wrote the example as:
let x = || drop(x);
fn_closure(x);
then the expected type is not involved, and indeed the closure would be selected as `FnOnce`, because it needs to own its content. This would result in an error as well, but later on, when invoking `fn_closure` (since `Fn` would not be satisfied).
It's also possible that defaulting would prevent upvar inference from working
Actually, upvar inference is supposed to be independent of the demands that are placed on the closure (modulo that bit about the "expected type"). That is, we select the most permissive trait that the closure could possibly implement, and that is weighed against what it is required to implement by others.
In any case, in your example, the type variable `T` would be inferred to the closure's unique type, so the default (of `dyn Fn()`) would never be used.
So perhaps upvar inference isn't a special case and should be a lazy constraint
Maybe, but I'm not convinced yet. =) Keep in mind that it's always something we could change later.
☔ The latest upstream changes (presumably #45337) made this pull request unmergeable. Please resolve the merge conflicts.
the `or_else` part was dead code.
This refactoring tries to make numeric fallback easier to reason about. Instead of applying all fallbacks at an arbitrary point in the middle of inference, we apply the fallback only when necessary and only for the variable that requires it, which for numeric fallback turns out to be just casts. The only visible consequence seems to be some error messages where instead of getting `i32` we get `{integer}` because we are less eager about fallback. The bigger goal is to make it easier to integrate user fallbacks into inference, if we ever figure that out.
It had only one caller.
@nikomatsakis this PR needs a review!

@bors r+

📌 Commit d49d428 has been approved by nikomatsakis

⌛ Testing commit d49d428 with merge 9d1980c1129d8b00124d8f9e5f47e69930f93539...

💔 Test failed - status-travis

⌛ Testing commit d49d428 with merge dab83c7428bb80ba9f4fe83f57ce304b5b9baeba...

💔 Test failed - status-travis

@bors retry Spuriously canceled? I can't find any failures.

@kennytm some builds were manually cancelled to allow jobs related to the release to run.
…, r=nikomatsakis Refactor diverging and numeric fallback. This refactoring tries to make numeric fallback easier to reason about. Instead of applying all fallbacks at an arbitrary point in the middle of inference, we apply the fallback only when necessary and only for the variable that requires it. The only place that requires early fallback is the target of numeric casts. The visible consequence is that some error messages that got `i32` now get `{integer}` because we are less eager about fallback. The bigger goal is to make it easier to integrate user fallbacks into inference, if we ever figure that out.
☀️ Test successful - status-appveyor, status-travis
This refactoring tries to make numeric fallback easier to reason about. Instead of applying all fallbacks at an arbitrary point in the middle of inference, we apply the fallback only when necessary and only for the variable that requires it. The only place that requires early fallback is the target of numeric casts.

The visible consequence is that some error messages that got `i32` now get `{integer}` because we are less eager about fallback. The bigger goal is to make it easier to integrate user fallbacks into inference, if we ever figure that out.