[ty] Infer arguments of generic calls with declared type context #21071
Conversation
```python
b: TD | None = f([{ "x": 0 }, { "x": 1 }])
# TODO: Narrow away the `None` here.
reveal_type(b)  # revealed: TD | None
```
This is really tricky. We infer the list argument to be `list[TD | None]` due to the annotated type of `T` being `TD | None`. I'm not sure there's a clear heuristic for filtering away the `None` here (and similarly for any type other than `None`).
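For context, here's a minimal runnable reconstruction of the scenario; the definitions of `TD` and `f` below are hypothetical stand-ins for the test fixture, not the actual code under discussion:

```python
from __future__ import annotations

from typing import TypedDict, TypeVar

T = TypeVar("T")

class TD(TypedDict):
    x: int

def f(items: list[T]) -> T:
    # Hypothetical stand-in: any generic function whose return type is T.
    return items[0]

# The declared type `TD | None` provides the context for solving T.
# Binding T = TD | None makes the list argument infer as `list[TD | None]`,
# even though every element of the literal is a plain `TD`.
b: TD | None = f([{"x": 0}, {"x": 1}])
```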
Diagnostic diff on typing conformance tests: no changes detected when running ty on the typing conformance tests ✅
Force-pushed from 79171d2 to 72acb7e.
```rust
if let Some(generic_context) = overload.signature.generic_context
    && let Some(return_ty) = overload.signature.return_ty
    && let Some(declared_return_ty) = call_expression_tcx.annotation
{
    let mut builder =
        SpecializationBuilder::new(db, generic_context.inferable_typevars(db));

    let _ = builder.infer(return_ty, declared_return_ty);
    let specialization = builder.build(generic_context, call_expression_tcx);

    // Note that we are not necessarily "preferring the declared type" here, as the
    // type context will only be preferred during the inference of this expression
    // by the same heuristics we use for the inference of the outer generic call.
    parameter_type = parameter_type.apply_specialization(db, specialization);
}
```
If I'm following the control flow correctly, this has to happen here because we haven't yet inferred a specialization in the `Bindings`, since that doesn't happen until you call `check_types`. And we need this specialization applied to the parameter type in order to have a type context to use when inferring the argument types. Does that sound right?

My worry here is that the specialization you're building isn't guaranteed to line up with the one that `check_types` will infer. Or at least, it won't once we have the better constraint solver, where the inferred specialization can be influenced by constraints coming from, e.g., other arguments.

Ideally we'd find a way to reuse the specialization that `Bindings` is already building up for us. Should we consider moving the specialization inference step so that it happens at the end of `match_parameters`, instead of at the beginning of `check_types`? Or as an explicit third step in between the two?

That might be something we can queue up as follow-on work, in order to land this. But I'd like to at least think through what would be involved, since I think it might add more blockers to getting the new constraint solver landed.
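To make the concern concrete, here's a hypothetical sketch (`pair` and `use` are invented for illustration; this is not ty's actual solver behavior):

```python
from __future__ import annotations

from typing import TypeVar

T = TypeVar("T")

def pair(x: T, y: T) -> list[T]:
    # Hypothetical generic function: both parameters constrain the same T.
    return [x, y]

def use(a: int, b: str) -> list[int | str]:
    # A specialization built early, from the declared context alone, solves
    # T = int | str before looking at the arguments. A full constraint solver
    # must also combine the constraints contributed by `a` and `b`, so the
    # early specialization is not guaranteed to match the one that a later
    # check_types-style pass would infer.
    z: list[int | str] = pair(a, b)
    return z
```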
> If I'm following the control flow correctly, this has to happen here because we haven't yet inferred a specialization in the `Bindings`, since that doesn't happen until you call `check_types`. And we need this specialization applied to the parameter type in order to have a type context to use when inferring the argument types. Does that sound right?

Yeah, that's correct.
> Ideally we'd find a way to be able to reuse the specialization that `Bindings` is already building up for us. Should we consider moving the specialization inference step so that it happens at the end of `match_parameters`, instead of at the beginning of `check_types`? Or as an explicit third step in between the two?

I'm not exactly sure how that would work. Constructing the complete binding specialization is only possible once we have the types of all arguments, which we want to infer using the type context, so there's a bit of a circular dependency here.
Consider:

```python
def f[T](x: list[T], y: T) -> list[T]:
    return x

def lst[T](x: T) -> list[T]:
    return [x]

def _(x: int, y: int | str):
    z: list[int | str] = f(lst(x), y)  # error: expected list[int | str], found list[int]
```

In order to infer a specialization for `f`, we must first infer each of its arguments. The call to `lst(x)` has no way of specializing to `int | str` except by using the type context `list[int | str]` to infer a more assignable type, such that it passes `check_types` once the complete specialization is inferred. Note that without the type context, pyright also fails to type-check, which suggests it's using the type context in a similar way to this PR.
The question then is, can using the extra type context lead to a less assignable type (as I think you are suggesting)? I'm not sure, as the specialized return type must be assignable to the annotated type of the call expression. The only issue I see is the question of how to narrow the type context such that we don't overly widen the inferred type of the call, e.g.:
```python
def f[T](x: list[T]) -> T:
    return x[0]

def lst[T](x: T) -> list[T]:
    return [x]

def _(x: int, y: int | str):
    z: int | str = f(lst(x))
    reveal_type(z)  # should be int, not int | str
```
Superseded by #21210.
Summary
Resolves astral-sh/ty#1356.