Change inference #799
Conversation
Change the algorithm that determines whether type variables are minimized or maximized. We used to look only at the variance of the type variable in the containing type. We now also look, with higher precedence, at the direction from which the type variable was constrained. This is closer to what scalac does.
(scalac and dotty both produce an error here)
We now also consider type variables in a selection prefix of the application. The test case was augmented with a snippet that only succeeds under this generalization.
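A hypothetical sketch of the constraint-direction idea described above (the object and method names here are invented for illustration and are not compiler code): a type variable that is only constrained from below is minimized to its lower bound, even when variance alone would not make the choice obvious.

```scala
// Hypothetical illustration of minimizing by constraint direction.
object ConstraintDirectionDemo {
  // Calling id(42) only adds the constraint Int <: T, so T is
  // constrained purely from below and is minimized to Int.
  def id[T](x: T): T = x

  // Here T occurs in both covariant positions (x, the result) and a
  // contravariant one (the parameter of f), so variance alone is
  // ambiguous; the lower-bound constraint Int <: T from the argument
  // decides the instantiation T = Int.
  def applyTwice[T](x: T)(f: T => T): T = f(f(x))

  def main(args: Array[String]): Unit = {
    val a: Int = id(42)               // T = Int (minimized)
    val b: Int = applyTwice(1)(_ + 1) // T = Int, not Any
    println(a + b)                    // prints 45
  }
}
```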
Can we get this reviewed please? It's been lingering for a long time.

Yes, I'm on it.
Overall I think this PR is a good improvement on the current situation, but there are still things which don't work correctly, for example:

```scala
object Test {
  def one[T](x: T)(implicit ev: T): Nothing = ???

  def test = {
    implicit val ii: Int = 42
    one(10)
    // error: ambiguous implicits: both getter StringCanBuildFrom in object Predef$
    // and getter NothingClassTag in object DottyPredef$ match type Any of
    // parameter ev of method one in object Test$
  }
}
```

I'm working on fixing this (it should only require us to be less eager to instantiate in …).
OK, should we get this in then and do further improvements in a separate PR?

Yes, sounds good to me.
Change inference scheme to look at how type variables are constrained
instead of just taking into account the variance in which they occur in some
type. Review by @smarter.