Polymorphic inference: support for parameter specifications and lambdas #15837
Conversation
@@ -6456,7 +6456,7 @@ P = ParamSpec("P")
R = TypeVar("R")

@overload
def func(x: Callable[Concatenate[Any, P], R]) -> Callable[P, R]: ... # E: Overloaded function signatures 1 and 2 overlap with incompatible return types
I forgot to check why this error disappeared. I will take a look at it if you think it is important.
I don't know how important this is, though the error is correct. Maybe (I haven't looked through the code yet) you're saying the `P` here is the same as the `P` in the other overload?
Looking at the `mypy_primer` output, I see there is something suspicious going on with overloads, so I will dig a bit deeper into this.
OK, I fixed one of the new overload errors in `mypy_primer`; this one, however, looks quite tricky to bring back. But note that the error re-appears when I enable `--new-type-inference` (and also during unification manually, which that flag doesn't control). So I guess we can just wait until it is on by default.
Yeah, this doesn't look like a serious regression.
I just looked through tests to get an idea of what's going on, so ignore my questions/comments if they don't make sense!
# This is counter-intuitive but looks correct, dec matches itself only if P is empty
reveal_type(dec(dec)) # N: Revealed type is "def [T, S] (T`11, f: def () -> def (T`11) -> S`12) -> S`12"
Just to provide an alternative inference, I popped open pyright (which last time I played around with paramspec also broke, so take the result with a massive grain of salt) and it spit out this type:
Type of "dec(dec)" is "(T@dec, f: (f: (**P@dec) -> ((T@dec) -> S@dec)) -> ((T@dec) -> S@dec)) -> S@dec"
I struggle to read that, but it looks like it's keeping a `P` somehow and also taking `T` and `S` multiple times. I agree with the inference mypy gives, though.
(Is it too late to say I prefer `Concatenate[T, P]` over `[T, **P]`...)
FWIW I am fine with `[T, **P]` (it also somehow hints at its internal representation).
Btw, after some more thinking, I think technically we can infer not just an empty `P`, but a `P` that can be empty (e.g. one that contains only default arguments). But this is not really related to this PR and may be done later if people ask for it.
reveal_type(dec(either)) # N: Revealed type is "def [T] (x: builtins.list[T`4], y: builtins.list[T`4]) -> T`4"
[builtins fixtures/list.pyi]

[case testInferenceAgainstGenericParamSpecPopOff]
Wow, thank you!
reveal_type(dec(id)) # N: Revealed type is "def [T] (x: T`2) -> builtins.list[T`2]"
reveal_type(dec(either)) # N: Revealed type is "def [T] (x: T`4, y: T`4) -> builtins.list[T`4]"
reveal_type(dec(pair)) # N: Revealed type is "def [U, V] (x: U`-1, y: V`-2) -> builtins.list[Tuple[U`-1, V`-2]]"
I pointed this out before but I'll move that comment here because it's significantly more visible here ^^:
I don't really like how the typevars switched from a negative to a positive id. While I can't imagine this has any negative (heh) impact, it annoys the pedant in me. (Specifically, positive ids are supposedly for classes, which these are not.)
But more so, here specifically, I can see that we're treating a directly held typevar (the return type) differently from a typevar stuffed inside a generic type (`Tuple[U, V]`) (...though that's not an `Instance`). This feels like special-casing that might bite later. Maybe it's fine?
It is (much) more complicated than that. Positive ids are also generated by `new_unification_variable()`. Also, there is such a thing as meta-level (it is 1 for unification variables, while 0 for regular `TypeVar`s, i.e. those that are bound in the current scope). Also, we use freshen/freeze cycles for reasons other than inference, etc.
This is all indeed unnecessarily complicated, but this PR doesn't really add much to it, and cleaning it up would be quite tricky for a very modest benefit. There is one known problem caused by using these semi-global numeric ids -- accidental id clashes (and half of the complications are there to avoid them) -- but it seems that the consensus is that the best solution is to introduce namespaces, like we already did for class type variables.
Ah, alright. I was basing my assumption on the docstring for `TypeVarId` xd
test-data/unit/check-generics.test
reveal_type(dec3(lambda x: x[0])) # N: Revealed type is "def [S] (S`5) -> S`5"
reveal_type(dec4(lambda x: [x])) # N: Revealed type is "def [S] (S`7) -> S`7"
Could you include tests that show incorrect usage of `dec3`/`dec4` erroring? I.e. `lambda x: x` for both.
I added the test cases, but note that `dec3(lambda x: x)` is actually a valid call (see the example body I added to see why).
def g(x: int) -> List[T]:
    return [f(x)] * x
return g
Is this function body particularly important?
I thought it would be good to have at least a few tests with bound type variables around (just in case), plus this is a good reminder/hint for a reader about why the test behaves as it does (see e.g. the body I added for `dec3` in another comment).
test-data/unit/check-inference.test
main:4: error: Cannot infer type argument 1 of "f"
main:4: error: Argument 2 to "f" has incompatible type "int"; expected "str"
main:5: error: Need type annotation for "y"
main:5: error: Argument 2 to "f" has incompatible type "int"; expected "str"
Maybe use the comment-style for test outputs?
Yes, I updated the test.
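For illustration, the comment style being discussed attaches the expected messages directly to the offending lines as `# E:` comments (a hypothetical snippet of mine, not the actual updated test), instead of listing separate `main:N: error: ...` lines:

def f(x: int, y: str) -> None: ...
f(1, 2)  # E: Argument 2 to "f" has incompatible type "int"; expected "str"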
reveal_type(func(spam)) # N: Revealed type is "def (*Any, **Any) -> builtins.str"
reveal_type(func(spam)) # N: Revealed type is "def (*Any, **Any) -> Any"
I'm surprised about this change (just purely looking at output): mypy seems to think `spam` is going to be using a single overload of `func`, and both return the return type unchanged. How is `str` turning into `Any`...?
This one I understand: there is a rule for overloads that if
- arguments match more than one overload variant,
- the multiple match is caused by `Any` types in the argument types,
- the return types in the matched overloads are not all equivalent,

then the inferred return type becomes `Any`. IIUC all three conditions are satisfied in this case, so it is probably OK.
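A minimal, self-contained sketch of that rule (my example, not from the PR's test suite): the argument is `Any`, both variants match, and their return types differ, so the call is inferred as `Any`.

from typing import Any, overload

@overload
def parse(x: int) -> int: ...
@overload
def parse(x: str) -> str: ...
def parse(x: Any) -> Any:
    return x

def use(value: Any) -> None:
    # Both overloads match because the argument is Any, and their return
    # types (int vs str) are not equivalent, so mypy infers Any here.
    reveal_type(parse(value))  # Revealed type is "Any"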
@@ -1547,5 +1553,5 @@ U = TypeVar("U")
def dec(f: Callable[P, T]) -> Callable[P, List[T]]: ...
def test(x: U) -> U: ...
reveal_type(dec) # N: Revealed type is "def [P, T] (f: def (*P.args, **P.kwargs) -> T`-2) -> def (*P.args, **P.kwargs) -> builtins.list[T`-2]"
reveal_type(dec(test)) # N: Revealed type is "def [U] (x: U`-1) -> builtins.list[U`-1]"
reveal_type(dec(test)) # N: Revealed type is "def [T] (x: T`2) -> builtins.list[T`2]"
Could you add a bit kinda like this?:
class A: ...
TA = TypeVar("TA", bound=A)
def test_with_bound(x: TA) -> TA: ...
reveal_type(dec(test_with_bound))
dec(test_with_bound)(0)  # should error
dec(test_with_bound)(A())  # should be AOK
I'm a bit concerned about the replacing of type variables here, though I do remember seeing something in an earlier PR about updating bounds, so that's probably already handled.
Yes, upper bounds should be ~decently handled. I added the test.
@A5rocks Thanks for the comments! It is too late now, so I am going to take a look at them tomorrow.
When testing this PR, I have come across the following new error compared to v1.5.0:

from functools import partial
from typing import Any, Protocol, TypeVar
import numpy as np
import numpy.typing as npt
N = TypeVar("N", bound=np.number[Any])
class Fun(Protocol):
    def __call__(self, a: npt.NDArray[N], axis: int) -> npt.NDArray[N]:
        ...

def return_bool() -> bool:
    return False

fun: Fun
if return_bool():
    fun = np.max
else:
    fun = partial(np.mean, dtype=np.float64)
This does not happen with ... Is that an intended error that was missed before, or is that a false positive? I am unsure how strictly ...
@bersbersbers I am not 100% sure, but it may be a genuine error. Looking at the error message, it looks like it was expecting a function that returns whatever it takes, e.g. if the function is given an array of integers, it will return an array of integers; but you are assigning it a function that always returns an array of floats (even if it is given an array of integers). Such an assignment is not safe.
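A numpy-free sketch of the same unsafety (my own illustration, with made-up names): the protocol promises to return the same element type it was given, while the assigned function always returns floats, so mypy rejects the assignment.

from typing import List, Protocol, TypeVar

T = TypeVar("T")

class SameAsInput(Protocol):
    def __call__(self, a: List[T]) -> List[T]: ...

def always_floats(a: List[T]) -> List[float]:
    return [float(len(a))]

f: SameAsInput = always_floats  # error: for T=int this would have to return List[int]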
You are right. Another problem is that ...
Hello, I was testing this out and found that the pop-off logic seems to be dropping type vars if they have a bound. I have attached the code I was using, slightly simplified, but I will probably simplify this more tomorrow or something.

Messy reproduction code:
import typing
import attrs
import functools
from typing import Generic, Callable, Concatenate

T = typing.TypeVar("T")
V = typing.TypeVar("V")
P = typing.ParamSpec("P")
OriginalArgs = typing.ParamSpec("OriginalArgs")

@attrs.frozen()
class Command(Generic[OriginalArgs]):
    ...

def command() -> Callable[[Callable[P, object]], Command[P]]:
    ...

CommandT1 = typing.TypeVar("CommandT1", bound=Command[...])  # try removing this `bound`!
CommandT2 = typing.TypeVar("CommandT2", bound=Command[...])  # same here!

def command_builder() -> Callable[[Callable[Concatenate[CommandT1, P], CommandT2]], Callable[P, Callable[[CommandT1], CommandT2]]]:
    def command_builder_inner(real_command_builder: Callable[Concatenate[CommandT1, P], CommandT2]) -> Callable[P, Callable[[CommandT1], CommandT2]]:
        def new_command_builder(*args: P.args, **kwargs: P.kwargs) -> Callable[[CommandT1], CommandT2]:
            def inner_inner_inner(command: CommandT1) -> CommandT2:
                return real_command_builder(command, *args, **kwargs)
            return inner_inner_inner
        return new_command_builder
    return command_builder_inner

@command_builder()
def add_builder(command: Command[OriginalArgs]) -> Command[OriginalArgs]:
    ...

@add_builder()
@command()
async def add(x: int, y: int) -> None:
    ...

reveal_type(add)
# ^ has `Revealed type is "repro.Command[[*Any, **Any]]"` if bound
# ^ has `Revealed type is "repro.Command[[x: builtins.int, y: builtins.int]]"` if not bound
Diff from mypy_primer, showing the effect of this PR on open source code:

Expression (https://github.com/cognitedata/Expression)
+ expression/extra/parser.py:522: error: Cannot infer type argument 1 of "pipe" [misc]
Tanjun (https://github.com/FasterSpeeding/Tanjun)
- tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "CooldownPreExecution"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "CooldownPreExecution"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "CooldownPreExecution"; expected "Callable[[Any, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
- tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "CooldownPostExecution"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "CooldownPostExecution"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1302: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "CooldownPostExecution"; expected "Callable[[Any, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
- tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "ConcurrencyPreExecution"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "ConcurrencyPreExecution"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_pre_execution" of "Hooks" has incompatible type "ConcurrencyPreExecution"; expected "Callable[[Any, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
- tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "ConcurrencyPostExecution"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "ConcurrencyPostExecution"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
+ tanjun/dependencies/limiters.py:1796: error: Argument 1 to "add_post_execution" of "Hooks" has incompatible type "ConcurrencyPostExecution"; expected "Callable[[Any, VarArg(Any), KwArg(Any)], Optional[Coroutine[Any, Any, None]]]" [arg-type]
- tanjun/checks.py:624: error: Argument 2 to "_optional_kwargs" has incompatible type "DmCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:624: error: Argument 2 to "_optional_kwargs" has incompatible type "DmCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:690: error: Argument 2 to "_optional_kwargs" has incompatible type "GuildCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:690: error: Argument 2 to "_optional_kwargs" has incompatible type "GuildCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:752: error: Argument 2 to "_optional_kwargs" has incompatible type "NsfwCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:752: error: Argument 2 to "_optional_kwargs" has incompatible type "NsfwCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:814: error: Argument 2 to "_optional_kwargs" has incompatible type "SfwCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:814: error: Argument 2 to "_optional_kwargs" has incompatible type "SfwCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:876: error: Argument 2 to "_optional_kwargs" has incompatible type "OwnerCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:876: error: Argument 2 to "_optional_kwargs" has incompatible type "OwnerCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:932: error: Argument 2 to "_add_to_command" has incompatible type "AuthorPermissionCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:932: error: Argument 2 to "_add_to_command" has incompatible type "AuthorPermissionCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:989: error: Argument 2 to "_add_to_command" has incompatible type "OwnPermissionCheck"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:989: error: Argument 2 to "_add_to_command" has incompatible type "OwnPermissionCheck"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:1002: error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1009: error: Overloaded function signature 3 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1009: error: Overloaded function signature 3 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1016: error: Overloaded function signature 4 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1016: error: Overloaded function signature 4 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1016: error: Overloaded function signature 4 will never be matched: signature 3's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1093: error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1103: error: Overloaded function signature 3 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1103: error: Overloaded function signature 3 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1113: error: Overloaded function signature 4 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1113: error: Overloaded function signature 4 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1113: error: Overloaded function signature 4 will never be matched: signature 3's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1150: error: Argument 2 to "_add_to_command" has incompatible type "Callable[[_ContextT_contra], Coroutine[Any, Any, bool]]"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:1150: error: Argument 2 to "_add_to_command" has incompatible type "Callable[[Context], Coroutine[Any, Any, bool]]"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:1245: error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1259: error: Overloaded function signature 3 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1259: error: Overloaded function signature 3 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1273: error: Overloaded function signature 4 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1273: error: Overloaded function signature 4 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1273: error: Overloaded function signature 4 will never be matched: signature 3's parameter type(s) are the same or broader [misc]
- tanjun/checks.py:1339: error: Argument 2 to "_add_to_command" has incompatible type "_AnyCallback[<nothing>]"; expected "Callable[[_ContextT_contra, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
+ tanjun/checks.py:1339: error: Argument 2 to "_add_to_command" has incompatible type "_AnyCallback[Any]"; expected "Callable[[Context, VarArg(Any), KwArg(Any)], Union[Coroutine[Any, Any, bool], bool]]" [arg-type]
- tanjun/checks.py:1339: note: "_AnyCallback[<nothing>].__call__" has type "Callable[[<nothing>, DefaultNamedArg(Optional[AbstractLocaliser], 'localiser')], Coroutine[Any, Any, bool]]"
... (truncated 7 lines) ...
discord.py (https://github.com/Rapptz/discord.py)
- discord/ext/commands/context.py:613: error: Incompatible types in assignment (expression has type "Callable[[Cog], Coroutine[Any, Any, None]]", variable has type "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]") [assignment]
+ discord/ext/commands/context.py:613: error: Argument 1 to "wrap_callback" has incompatible type "Callable[[Cog], Coroutine[Any, Any, None]]"; expected "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]" [arg-type]
- discord/ext/commands/context.py:616: error: Incompatible types in assignment (expression has type "Callable[[Group[Any, [VarArg(Any), KwArg(Any)], Any]], Coroutine[Any, Any, None]]", variable has type "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]") [assignment]
+ discord/ext/commands/context.py:616: error: Argument 1 to "wrap_callback" has incompatible type "Callable[[Group[Any, [VarArg(Any), KwArg(Any)], Any]], Coroutine[Any, Any, None]]"; expected "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]" [arg-type]
- discord/ext/commands/context.py:619: error: Incompatible types in assignment (expression has type "Callable[[Command[Any, [VarArg(Any), KwArg(Any)], Any]], Coroutine[Any, Any, None]]", variable has type "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]") [assignment]
+ discord/ext/commands/context.py:619: error: Argument 1 to "wrap_callback" has incompatible type "Callable[[Command[Any, [VarArg(Any), KwArg(Any)], Any]], Coroutine[Any, Any, None]]"; expected "Callable[[Mapping[Cog | None, list[Command[Any, [VarArg(Any), KwArg(Any)], Any]]]], Coroutine[Any, Any, None]]" [arg-type]
+ discord/ext/commands/core.py:1729: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc]
Thanks, this is a big improvement to ParamSpec support in particular. I didn't have time for a deep review, but based on a quick look everything looks good, and it's totally fine to leave some TODOs (it's not like the original implementation covered all edge cases).
I have one optional request: Can you link to some issues that we can close in the PR description, assuming there are some?
I encourage others to also have a look at the changes, but this seems ready to merge now.
@@ -3035,3 +3036,230 @@ reveal_type(dec1(id2)) # N: Revealed type is "def [S in (builtins.int, builtins
reveal_type(dec2(id1)) # N: Revealed type is "def [UC <: __main__.C] (UC`5) -> builtins.list[UC`5]"
reveal_type(dec2(id2)) # N: Revealed type is "def (<nothing>) -> builtins.list[<nothing>]" \
    # E: Argument 1 to "dec2" has incompatible type "Callable[[V], V]"; expected "Callable[[<nothing>], <nothing>]"

[case testInferenceAgainstGenericLambdas]
It's amazing how many new cases we can now handle!
Yes, there are probably a dozen or so issues that will be fixed by this. But the situation with issues is a bit messy now: there is a bunch that were closed, but we didn't re-open them after @A5rocks's PR was reverted. I checked, and those should be fixed by this PR as well. I wanted to merge this first, and then go over every issue with ...
Filler comments. Looks good!
extra_vars = []
for arg in ctx.arg_types:
    meta_vars = [tv for tv in get_all_type_vars(arg) if tv.id.is_meta_var()]
    extra_vars.extend([tv for tv in meta_vars if tv not in extra_vars])
I think `extra_vars` could be a set maybe? That would mean an IMO simpler comprehension.
I'm also not sure why `ctx.variables` is guaranteed to not include these new variables.
IIRC I use this logic with lists for variables here (and in a few other places) to have a stable order. Otherwise tests will randomly fail on `reveal_type()` (and it is generally good to have a predictable, stable order for comparison purposes).
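As a small aside (plain Python, not mypy internals), this is the design choice in question: list-based deduplication keeps first-seen order, whereas collecting into a set would make the order of the rendered type variables depend on hashing.

from typing import Iterable, List, TypeVar

X = TypeVar("X")

def dedup_preserving_order(items: Iterable[X]) -> List[X]:
    seen: List[X] = []
    for item in items:
        if item not in seen:  # linear scan, fine for short type-variable lists
            seen.append(item)
    return seen

assert dedup_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]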
for tv in get_all_type_vars(arg):
    if isinstance(tv, ParamSpecType):
        normalized: TypeVarLikeType = tv.copy_modified(
            flavor=ParamSpecFlavor.BARE, prefix=Parameters([], [], [])
Why drop the prefix?
If you have something like `Callable[[P], Foo[Concatenate[T, P]]]`, then `get_all_type_vars()` will return `P` twice, so we need to normalize.
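A hedged, user-level illustration of that situation (my own names, not from the test suite): the same ParamSpec occurs once bare and once inside a Concatenate prefix within a single argument type, so a naive walk over the type would report two differently-shaped occurrences of `P` unless the prefixed one is normalized to its bare form.

from typing import Callable, Concatenate, List, ParamSpec, TypeVar

T = TypeVar("T")
P = ParamSpec("P")

# P occurs twice in this one argument type: bare as the outer Callable's
# parameters, and with a prefix inside Concatenate[T, P] of the inner Callable.
def example(f: Callable[P, Callable[Concatenate[T, P], List[T]]]) -> None:
    ...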
# instead of this lock-step loop over argument types. This identical
# logic should be used in 5 places: in Parameters vs Parameters
# inference, in Instance vs Instance inference for prefixes (two
# branches), and in Callable vs Callable inference (two branches).
for t, a in zip(template_args, cactual_args):
Is the TODO referring to something like #15320 (...which I still need to fix)? That modifies the `subtypes.py` file, but that sounds like what you're referring to...
This is about #702 (wow, three-digit issue). I actually wanted to work on this at some point soon. I am not sure I will be able to handle all cases, but I guess it should be relatively straightforward to cover ~95% of cases.
arg_kinds=t.arg_kinds[:-2] + prefix.arg_kinds + t.arg_kinds[-2:],
arg_names=t.arg_names[:-2] + prefix.arg_names + t.arg_names[-2:],
Do you have a reason for this ordering? I recall not being able to come up with an ordering (and now that I look at my own code here, I'm not sure what I was even thinking, using both `param_spec` and `repl`...)
What effectively happens is we have `Concatenate[X, P]`, where `P = Concatenate[Y, Q]`, and for positional arguments nested concatenates simply flatten. I didn't think much about non-positional arguments, but IIUC for them the order is not that important, and some other parts of the code already work well only for positional arguments.
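A rough user-level example of that flattening (mine, not from the PR): when `P` is solved to a parameter list that itself starts with a Concatenate prefix, substituting it back simply extends the outer prefix.

from typing import Callable, Concatenate, ParamSpec

P = ParamSpec("P")
Q = ParamSpec("Q")

def strip_first_int(f: Callable[Concatenate[int, P], None]) -> Callable[P, None]:
    ...

def takes_int_str(x: int, y: str, *args: Q.args, **kwargs: Q.kwargs) -> None:
    ...

# P is solved as Concatenate[str, Q]; substituting it back into
# Concatenate[int, P] gives the flattened Concatenate[int, str, Q].
g = strip_first_int(takes_int_str)  # roughly Callable[Concatenate[str, Q], None]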
@@ -380,7 +359,14 @@ def transitive_closure(
    remaining = set(constraints)
    while remaining:
        c = remaining.pop()
        if isinstance(c.target, TypeVarType) and c.target.id in tvars:
        # Note that ParamSpec constraint P <: Q may be considered linear only if Q has no prefix,
OK I admit I have no idea what's going on here! Maybe I should actually read the recommended "dragon book" or something to get up to speed...
I actually don't recommend the "dragon book". To get a good understanding of how PLs work, I would rather recommend SICP + TAPL. (But also be aware that some of the things I use here are quite new, so they may not be covered in textbooks yet.)
type.copy_modified(ret_type=UninhabitedType()),
target.copy_modified(ret_type=UninhabitedType()),
Instead of this, is it possible to use `Parameters`-based inference instead? (I don't really like the whole "copy with an uninhabited return type" thing here, though it doesn't harm anything.)
... unless they're majorly different, in which case someone should PR in making them similar!!
This is an interesting idea; currently there are some differences (e.g. `Parameters` that contain a `ParamSpec` themselves are not handled). But I would rather put this in a separate PR.
    return tp.accept(TypeVarExtractor(include_all=True))


class TypeVarExtractor(TypeQuery[List[TypeVarLikeType]]):
Could you add a visit function for `TypeVarTuple` too?
This will be in a dedicated PR for `TypeVarTuple` (which will be the last PR in this series, or to be precise in this season; there are still a few things left, and I will open follow-up issues for them).
if isinstance(t, ParamSpecType):
    assert not t.prefix.arg_types
    # TODO: should we assert that only ARG_STAR contain ParamSpecType?
    # See testParamSpecJoin, that relies on passing e.g `P.args` as plain argument.
More generally, I want to put aside some time to fix issues with too-loose validation for ParamSpecs. This is super duper low hanging fruit :P
... Took me a bit but I see what you mean for `testParamSpecJoin`. Yeah, IMO that's valid.
This is a third follow-up for #15287 (likely there will be just one more PR, for `TypeVarTuple`s, and a few less important items I mentioned in the original PR I will leave for the more distant future).

After all, this PR turned out to be larger than I wanted. The problem is that `Concatenate` support for `ParamSpec` was quite broken, and this caused many of my tests to fail. So I decided to include some major cleanup in this PR (I tried splitting it into a separate PR, but it turned out to be tricky). After all, if one ignores the added tests, it is almost a net zero line count.

The main problems that I encountered are:
- Valid substitutions for a `ParamSpecType` were: another `ParamSpecType`, `Parameters`, and `CallableType` (and also `AnyType` and `UninhabitedType`, but those seem to be handled trivially). Having `CallableType` in this list caused various missed cases, bogus `get_proper_type()`s, and was generally counter-intuitive.
- `Concatenate` is used in two different forms: as a prefix for `ParamSpecType` (used mostly for instances), and as separate argument types (used mostly for callables). The problem is that some parts of the code were implicitly relying on it being in one or the other form, while some other code uncontrollably switched between the two.

I propose to fix this by introducing some simplifications and rules (some of which I enforce by asserts):
- Valid substitutions for a `ParamSpecType` are `ParamSpecType` and `Parameters`.
- If a `ParamSpecType` appears in a callable, it must have an empty `prefix`.
- `Parameters` cannot contain other `Parameters` (and ideally also `ParamSpecType`s) among argument types.
- For inference, `Concatenate` is brought to a common representation (because both callables and instances may appear in the same expression); using the `ParamSpecType` representation with `prefix` looks significantly simpler (especially in the solver).

Apart from this, the actual implementation of polymorphic inference is simple/straightforward; I just handle the additional `ParamSpecType` cases (in addition to `TypeVarType`) for inference, for the solver, and for application. I also enabled polymorphic inference for lambda expressions, since they are handled by similar code paths.

Some minor comments:
- ... a `TypeVar` id clash).
- ... `--new-type-inference` because the error messages are slightly different, and so it is easier for me to test a global flip to `True` locally.
- ... `mypy_primer` output will be particularly bad.

cc @A5rocks (for some reason I cannot add you as a reviewer).
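As a hedged, user-level illustration of the "both callables and instances may appear in the same expression" point (the names below are mine, not from the PR): the same Concatenate shows up inside a Callable type and as the type argument of a ParamSpec-generic class, which is why inference benefits from a single common representation.

from typing import Callable, Concatenate, Generic, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

class Task(Generic[P]):  # an instance type parameterized by a ParamSpec
    ...

def schedule(f: Callable[Concatenate[int, P], R]) -> Task[Concatenate[int, P]]:
    ...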