Dig deeper on avoidance #8900
Here's one problematic case:

```scala
class Inv[T](val elem: T)

object Test {
  def unwrap[Outer](inv: Inv[Outer]): Outer = inv.elem
  def wrap[Inner](i: Inner): Inv[Inner] = new Inv(i)

  val a = unwrap({
    class Local
    val local = new Local
    wrap(local)
  })
}
```

The inferred type for …
(I'm also growing skeptical about the usage of nesting levels in general: because nesting forms a tree, the fact that a symbol has a smaller nesting level does not necessarily mean it's currently in scope; it might just be in a different branch of the nesting tree. But I haven't tried to write a proof of concept for this yet.)
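To make the worry in that parenthetical concrete, here is a minimal sketch (names are ours, not from the issue) of why nesting levels alone don't imply visibility: a symbol's level only locates it on a path of the nesting tree, and a sibling branch can be "deeper" without seeing it.

```scala
object Levels {
  def branchA: Any = {   // body at nesting level 2, branch A
    class LocalA         // LocalA is defined at level 2 in branch A
    new LocalA
  }

  def branchB: Int = {   // a sibling branch of the nesting tree
    def inner: Int =     // body at level 3, branch B
      // We are "deeper" than LocalA here (3 > 2), yet LocalA is NOT in
      // scope: a smaller nesting level only guarantees visibility along
      // the current path to the root, not across sibling branches.
      42
    inner
  }
}
```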
Welp, this is going to be trickier than I thought. My plan was to run avoidance on all our open constraints whenever some symbols went out of scope, but I now realize that this isn't enough: these local symbols can have bad bounds, meaning we can derive absurd constraints from them in a way that we cannot recover from just using avoidance. For example, the following crashes at runtime:

```scala
trait Base {
  type M
}
trait A {
  type M >: Int | String
}
trait B {
  type M <: Int & String
}

object Test {
  def foo[T](z: T, x: A & B => T): T = z

  def main(args: Array[String]): Unit = {
    val a = foo(1, x => (??? : x.M))
    val b: String = a // ClassCastException
  }
}
```

Because our constraints successively look like: …
This particular example could be fixed by checking that the constraints we end up with after typing the lambda entail the constraints we had before, but I'm not convinced this is enough. More radically, we could simply throw away all constraints computed locally, then constrain the result type of the lambda after avoidance to be a subtype of its expected type, to recover constraints that are not tainted by reasoning via bad bounds. This would also avoid the need for running avoidance on all constraints as I originally planned to do.
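The "throw away local constraints" scheme can be sketched as follows. This is a toy model with strings standing in for types and a plain list standing in for the constraint set; none of these names are the compiler's actual API.

```scala
// Toy model of the proposal: on entering a lambda body we snapshot the
// constraint set; after typing the body we discard everything derived
// locally (it may rest on bad bounds) and only record that the *avoided*
// body type conforms to the expected type.
type Constraints = List[String] // stand-in for the real constraint set

def typeLambdaBody(
    before: Constraints,                   // snapshot taken on entry
    typeBody: () => (String, Constraints), // body type + locally derived constraints
    avoid: String => String                // avoidance: drops out-of-scope symbols
)(expected: String): Constraints =
  val (bodyTpe, _) = typeBody()            // local constraints are dropped wholesale
  s"${avoid(bodyTpe)} <: $expected" :: before
```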
Why does this even type check? We have …
I tried to show that in my description of the constraints involved: we start with …
Ok, I see. Conceptually, if you were to go with a level-based approach (which is what I've been experimenting with privately), you should not allow registering a bound …

Anyway, I know existentials and type projections are being phased out (as opposed to fixed). This is just some intuition nuggets in case it gives you new ideas on how to solve this! I think proper type-theoretic existentials (not Scala 2 ones!) are a good way of thinking about type avoidance.
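To illustrate the existential intuition, here is a hedged sketch (our own encoding, reusing the `Inv` class from the first example) of how a type-theoretic existential can be modeled in Scala 3 with an abstract type member: conceptually, avoidance on the first example would ideally produce something like "there exists some hidden type wrapped in `Inv`" rather than leaking `Local`.

```scala
class Inv[T](val elem: T)

// Encoding of an existential package: the witness type is hidden
// behind an abstract type member instead of leaking into the result.
trait Packed:
  type Hidden
  val value: Inv[Hidden]

def pack[T](inv: Inv[T]): Packed = new Packed:
  type Hidden = T
  val value = inv
```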
I don't think this is enough in general: it seems to me that the mere existence of a value with bad bounds in scope means that you can taint your global constraints, even if that particular value never appears in those constraints. Thinking about it more, instead of introducing a new variable with bad bounds, we can also use the GADT logic to refine the bounds of an existing type. My first attempt was:

```scala
trait Base {
  type M
}
trait A extends Base {
  type M >: Int | String
}
trait B extends Base {
  type M <: Int & String
}

enum Foo[T] {
  case One extends Foo[String]
  case Two(val elem: A & B) extends Foo[elem.M]
}
import Foo._

object Test {
  def id[I](x: I): I = x

  def foo[T](x: Foo[T], y: T) =
    // I'd like to infer ?I = T but I get ?I = Any
    id(x match
      case One =>
        1 // infer ?I >: 1
      case Two(_) =>
        y: T // infer ?I >: T because we're reasoning with bad bounds
    )

  def main(args: Array[String]): Unit =
    val x: String = foo(One, "")
    println(x)
}
```

This almost worked: we did end up with the incorrect …

Finally, I remembered that if the expected type of a match expression is a match type, we type it with that match type instead of the lub of the cases, and I managed to produce this monstrosity, which compiles (with one warning about an unreachable case) and crashes with a ClassCastException:

```scala
trait Base:
  type M
trait A extends Base:
  type M >: Int | String
trait B extends Base:
  type M <: Int & String

sealed trait Foo[T]
class One[T]() extends Foo[T]
class Two(val elem: A & B) extends One[elem.M]

object Test:
  type Const[X, Y] <: Y = X match
    case AnyRef => Y
    case Two => Y

  def id[A, I](a: A, x: Const[A, I]): I = x

  def foo[T](x: Foo[T], y: T) = // The inferred result type is `T` !
    id(x, x match
      case _: AnyRef =>
        1 // infer ?I >: 1
      case _: Two =>
        y: T // infer ?I >: T because we're reasoning with bad bounds
    )

  def main(args: Array[String]): Unit =
    val x: String = foo(new One[String], "") // ClassCastException: java.lang.Integer cannot be cast to java.lang.String
    println(x)
```
@Blaisorblade, am I correct to assume that the literature on DOT has not considered the impact of bad bounds on type inference so far?
So in that case, the lack of transitivity would be a feature, not a bug :^) But I see your point about GADT reasoning messing things up in that case. If I understand your example correctly, in the …

If I go back to my levels idea, I'd say that …
I fear you're right; all I can think of is: …
Thanks for the references @Blaisorblade!
Yes, that's right!
I'm not sure how that would work in practice: if T already appears in constraints outside of the case, then locally changing its nesting level would invalidate those constraints. On the other hand, you could create a local T1 and use that, but then for GADT reasoning to work, you still want any local subtype check involving …
I don't really know better: my work on algorithmic (sub)typing is all based on removing subtyping reflection, but indeed, how to handle subtyping reflection is an interesting problem which I would like to come back to someday. It is not clear to me whether, within D<:, subtyping reflection is already undecidable, but with higher-kinded types as in Scala, subtyping reflection itself induces combinatory logic, which is readily undecidable. It would be particularly interesting if there were a syntax-directed algorithm which could solve part of subtyping reflection.
Yeah, this is the kind of thing I end up doing. Create a …
Yes, this can be done by replacing …
But is this sound? I have trouble believing that simply throwing away constraints is not going to create problems. For instance, what about something like …
Can you really? Suppose you have an existing constraint …
In present-day Scala this isn't an issue, because arguments to lambdas are always fully instantiated before typing their body, so yes :). Now if we had #9076 (which was already on shaky ground), the situation would be more complicated indeed, so this might be a death knell for #9076 (or maybe we can still salvage some of it, I haven't spent much time thinking about it).
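A small illustration of that point (the names here are ours, not from the issue): because the lambda's parameter type is fully instantiated before its body is typed, the body is checked against a concrete type and cannot feed new constraints back into the enclosing inference problem.

```scala
object LambdaParams:
  def pick[T](z: T, f: T => T): T = f(z)

  // When typing this call, ?T is instantiated to Int *before* the lambda
  // body is checked, so `x` has type Int inside the body and the body
  // cannot derive further constraints on ?T (e.g. via bad bounds on
  // type members of x's type, as in the crashing example above).
  val r = pick(1, x => x + 1)
```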
So, let's say we have … This level update upon going through lower-level type variables makes sense if you think about the fact that …
This is not very reassuring, though. I would not be surprised at all if there were many corner cases where things can still go sideways. It does not seem like a good idea to rest the soundness of the type system on an "accidental" type inference limitation :^)
Well, I think the current GADT reasoning implementation, its interactions with type inference, and the proposed approach of simply discarding constraints wholesale seem to be on vastly shakier ground than the simple idea of inferring lambda parameter types! In any case, this discussion raises many interesting questions!
Ah, I see: so the constraint propagation for a given variable is done in a context where more deeply nested variables aren't visible, basically.

Meanwhile, I've also realized that GADT constraints can propagate outwards incorrectly even without bad bounds being present /cc @abgruszecki :

```scala
sealed trait Foo[T]
class One[T]() extends Foo[T]
class Two() extends One[Int]

class Contra[-X]

object Test:
  type Const[X, Y] <: Y = X match
    case AnyRef => Y
    case Two => Y

  def id[A, I](a: A, x: Const[A, Contra[I]]): Contra[I] = x

  def foo[T](x: Foo[T], y: T) = // The inferred result type is `Contra[1]` !
    id(x, x match
      case _: AnyRef =>
        new Contra[T] // infer ?I <: T
      case _: Two =>
        // T := Int
        new Contra[1] // infer ?I <: 1
    )
```

Besides the approaches we've discussed above, it's also worth mentioning that GHC solves this problem by having implication constraints (so instead of registering …
Congrats, @smarter !
Reopening, as we had to temporarily turn off level-checking to avoid regressions due to interactions with other compiler parts that will take a while to sort out: #15343
#8867 fixed a tricky avoidance problem #8861 by introducing new techniques for preventing closure types from capturing variables they are not supposed to see.
There are several areas where our understanding of this issue is still incomplete. It would be good to follow this up with test cases, code fixes and possibly formalizations, as the subject matter is extremely slippery and the requirements for type inference are extremely stringent.
I left in the code base of #8867 several comments starting with … This should be followed up.