Mypy is failing with a bunch of weird type errors like the ones below.
I'm not sure what's going on.
```
torchvision/transforms/_functional_pil.py:56: error: Module has no attribute "FLIP_LEFT_RIGHT" [attr-defined]
        return img.transpose(Image.FLIP_LEFT_RIGHT)
                             ^~~~~~~~~~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:64: error: Module has no attribute "FLIP_TOP_BOTTOM" [attr-defined]
        return img.transpose(Image.FLIP_TOP_BOTTOM)
                             ^~~~~~~~~~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:206: error: Incompatible types in assignment (expression has type "ndarray[Any, dtype[Any]]", variable has type "Image") [assignment]
        img = np.asarray(img)
              ^~~~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:207: error: No overload variant of "pad" matches argument types "Image", "Tuple[Tuple[int, int], Tuple[int, int]]", "Literal['edge', 'reflect', 'symmetric']" [call-overload]
        img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_r...
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...
torchvision/transforms/_functional_pil.py:207: note: Possible overload variants:
torchvision/transforms/_functional_pil.py:207: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:207: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:207: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:207: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:212: error: Incompatible types in assignment (expression has type "ndarray[Any, dtype[Any]]", variable has type "Image") [assignment]
        img = np.asarray(img)
              ^~~~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:214: error: "Image" has no attribute "shape" [attr-defined]
        if len(img.shape) == 3:
               ^~~~~~~~~
torchvision/transforms/_functional_pil.py:215: error: No overload variant of "pad" matches argument types "Image", "Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int]]", "Literal['edge', 'reflect', 'symmetric']" [call-overload]
        img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_r...
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...
torchvision/transforms/_functional_pil.py:215: note: Possible overload variants:
torchvision/transforms/_functional_pil.py:215: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:215: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:215: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:215: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:217: error: "Image" has no attribute "shape" [attr-defined]
        if len(img.shape) == 2:
               ^~~~~~~~~
torchvision/transforms/_functional_pil.py:218: error: No overload variant of "pad" matches argument types "Image", "Tuple[Tuple[int, int], Tuple[int, int]]", "Literal['edge', 'reflect', 'symmetric']" [call-overload]
        img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_r...
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...
torchvision/transforms/_functional_pil.py:218: note: Possible overload variants:
torchvision/transforms/_functional_pil.py:218: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:218: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: Literal['constant', 'edge', 'linear_ramp', 'maximum', 'mean', 'median', 'minimum', 'reflect', 'symmetric', 'wrap', 'empty'] = ..., *, stat_length: Optional[Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]]] = ..., constant_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., end_values: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]] = ..., reflect_type: Literal['odd', 'even'] = ...) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:218: note: def [_SCT <: generic] pad(array: Union[_SupportsArray[dtype[_SCT]], _NestedSequence[_SupportsArray[dtype[_SCT]]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[_SCT]]
torchvision/transforms/_functional_pil.py:218: note: def pad(array: Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]], pad_width: Union[_SupportsArray[dtype[integer[Any]]], _NestedSequence[_SupportsArray[dtype[integer[Any]]]], int, _NestedSequence[int]], mode: _ModeFunc, **kwargs: Any) -> ndarray[Any, dtype[Any]]
torchvision/transforms/_functional_pil.py:242: error: Module has no attribute "BILINEAR" [attr-defined]
        interpolation: int = Image.BILINEAR,
                             ^~~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:288: error: Module has no attribute "NEAREST" [attr-defined]
        interpolation: int = Image.NEAREST,
                             ^~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:297: error: Module has no attribute "AFFINE" [attr-defined]
        return img.transform(output_size, Image.AFFINE, matrix, interpolat...
                                          ^~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:304: error: Module has no attribute "NEAREST" [attr-defined]
        interpolation: int = Image.NEAREST,
                             ^~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:321: error: Module has no attribute "BICUBIC" [attr-defined]
        interpolation: int = Image.BICUBIC,
                             ^~~~~~~~~~~~~
torchvision/transforms/_functional_pil.py:330: error: Module has no attribute "PERSPECTIVE" [attr-defined]
        return img.transform(img.size, Image.PERSPECTIVE, perspective_coef...
                                       ^~~~~~~~~~~~~~~~~
torchvision/prototype/utils/_internal.py:102: error: Argument 1 to "memoryview" has incompatible type "ndarray[Any, Any]"; expected "Buffer" [arg-type]
        self._memory = memoryview(tensor.numpy())
                                  ^~~~~~~~~~~~~~
torchvision/io/video_reader.py:28: error: Incompatible types in assignment (expression has type "ImportError", variable has type Module) [assignment]
        av = ImportError(
        ^
torchvision/io/video_reader.py:38: error: Incompatible types in assignment (expression has type "ImportError", variable has type Module) [assignment]
        av = ImportError(
        ^
torchvision/io/video_reader.py:200: error: Module has no attribute "EOFError" [attr-defined]
        except av.error.EOFError:
               ^~~~~~~~~~~~~~~~~
torchvision/io/video_reader.py:231: error: Unsupported operand types for / ("float" and "None") [operator]
        offset = int(round(time_s / temp_str.time_base))
                                  ^
torchvision/io/video_reader.py:231: note: Right operand is of type "Optional[Fraction]"
torchvision/io/video_reader.py:254: error: "Stream" has no attribute "sample_rate" [attr-defined]
        ...verage_rate if stream.average_rate is not None else stream.sample_rate
                                                               ^~~~~~~~~~~~~~~~~~
torchvision/io/video_reader.py:256: error: Unsupported operand types for * ("int" and "None") [operator]
        ... metadata[stream.type]["duration"].append(float(stream.duration * str...
        ^
torchvision/io/video_reader.py:256: error: No overload variant of "__rmul__" of "Fraction" matches argument type "None" [operator]
        ... metadata[stream.type]["duration"].append(float(stream.duration * str...
        ^~~~~~~~~~~~~~~~~~~~~...
torchvision/io/video_reader.py:256: note: Possible overload variants:
torchvision/io/video_reader.py:256: note: def __rmul__(b, Union[int, Fraction], /) -> Fraction
torchvision/io/video_reader.py:256: note: def __rmul__(b, float, /) -> float
torchvision/io/video_reader.py:256: note: def __rmul__(b, complex, /) -> complex
torchvision/io/video_reader.py:256: error: Unsupported left operand type for * ("None") [operator]
        ... metadata[stream.type]["duration"].append(float(stream.duration * str...
        ^~~~~~~~~~~~~~~~~~~~~...
torchvision/io/video_reader.py:256: note: Both left and right operands are unions
torchvision/datasets/cityscapes.py:191: error: Incompatible types in assignment (expression has type "Image", variable has type "Dict[str, Any]") [assignment]
        target = Image.open(self.targets[index][i])
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
torchvision/transforms/v2/functional/_geometry.py:116: error: Module "PIL.Image" is not valid as a type [valid-type]
        def _vertical_flip_image_pil(image: PIL.Image) -> PIL.Image:
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
torchvision/transforms/v2/functional/_geometry.py:116: note: Perhaps you meant to use a protocol matching the module structure?
torchvision/transforms/v2/functional/_color.py:733: error: Module "PIL.Image" is not valid as a type [valid-type]
        def _permute_channels_image_pil(image: PIL.Image.Image, permutation: L...
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...
torchvision/transforms/v2/functional/_color.py:733: note: Perhaps you meant to use a protocol matching the module structure?
torchvision/transforms/v2/_auto_augment.py:109: error: Argument 1 to "affine" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        image,
        ^~~~~
torchvision/transforms/v2/_auto_augment.py:122: error: Argument 1 to "affine" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        image,
        ^~~~~
torchvision/transforms/v2/_auto_augment.py:133: error: Argument 1 to "affine" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        image,
        ^~~~~
torchvision/transforms/v2/_auto_augment.py:143: error: Argument 1 to "affine" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        image,
        ^~~~~
torchvision/transforms/v2/_auto_augment.py:152: error: Argument 1 to "rotate" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.rotate(image, angle=magnitude, interpolation=inte...
                        ^~~~~
torchvision/transforms/v2/_auto_augment.py:154: error: Argument 1 to "adjust_brightness" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.adjust_brightness(image, brightness_factor=1.0 + ...
                                   ^~~~~
torchvision/transforms/v2/_auto_augment.py:156: error: Argument 1 to "adjust_saturation" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.adjust_saturation(image, saturation_factor=1.0 + ...
                                   ^~~~~
torchvision/transforms/v2/_auto_augment.py:158: error: Argument 1 to "adjust_contrast" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.adjust_contrast(image, contrast_factor=1.0 + magn...
                                 ^~~~~
torchvision/transforms/v2/_auto_augment.py:160: error: Argument 1 to "adjust_sharpness" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.adjust_sharpness(image, sharpness_factor=1.0 + ma...
                                  ^~~~~
torchvision/transforms/v2/_auto_augment.py:162: error: Argument 1 to "posterize" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.posterize(image, bits=int(magnitude))
                           ^~~~~
torchvision/transforms/v2/_auto_augment.py:165: error: Argument 1 to "solarize" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.solarize(image, threshold=bound * magnitude)
                          ^~~~~
torchvision/transforms/v2/_auto_augment.py:167: error: Argument 1 to "autocontrast" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.autocontrast(image)
                              ^~~~~
torchvision/transforms/v2/_auto_augment.py:169: error: Argument 1 to "equalize" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.equalize(image)
                          ^~~~~
torchvision/transforms/v2/_auto_augment.py:171: error: Argument 1 to "invert" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        return F.invert(image)
                        ^~~~~
torchvision/transforms/v2/_auto_augment.py:325: error: Argument 1 to "get_size" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        height, width = get_size(image_or_video)
                                 ^~~~~~~~~~~~~~
torchvision/transforms/v2/_auto_augment.py:414: error: Argument 1 to "get_size" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        height, width = get_size(image_or_video)
                                 ^~~~~~~~~~~~~~
torchvision/transforms/v2/_auto_augment.py:483: error: Argument 1 to "get_size" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        height, width = get_size(image_or_video)
                                 ^~~~~~~~~~~~~~
torchvision/transforms/v2/_auto_augment.py:575: error: Argument 1 to "get_size" has incompatible type "Union[Tensor, PIL.Image.Image, torchvision.tv_tensors._image.Image, Video]"; expected "Tensor" [arg-type]
        height, width = get_size(orig_image_or_video)
                                 ^~~~~~~~~~~~~~~~~~~
torchvision/transforms/v2/_auto_augment.py:616: error: Incompatible types in assignment (expression has type "Union[Tensor, Image]", variable has type "Tensor") [assignment]
        aug = self._apply_image_or_video_transform(
        ^
torchvision/prototype/transforms/_presets.py:47: error: Incompatible types in assignment (expression has type "Tensor", variable has type "Image") [assignment]
        img = F.pil_to_tensor(img)
              ^~~~~~~~~~~~~~~~~~~~
torchvision/prototype/transforms/_presets.py:53: error: Incompatible types in assignment (expression has type "Tensor", variable has type "Image") [assignment]
        img = F.resize(img, self.resize_size, interpolation=se...
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...
torchvision/prototype/transforms/_presets.py:53: error: Argument 1 to "resize" has incompatible type "Image"; expected "Tensor" [arg-type]
        img = F.resize(img, self.resize_size, interpolation=se...
                       ^~~
torchvision/prototype/transforms/_presets.py:55: error: Incompatible types in assignment (expression has type "Tensor", variable has type "Image") [assignment]
        img = F.rgb_to_grayscale(img)
              ^~~~~~~~~~~~~~~~~~~~~~~
torchvision/prototype/transforms/_presets.py:55: error: Argument 1 to "rgb_to_grayscale" has incompatible type "Image"; expected "Tensor" [arg-type]
        img = F.rgb_to_grayscale(img)
                                 ^~~
torchvision/prototype/transforms/_presets.py:56: error: Argument 1 to "convert_image_dtype" has incompatible type "Image"; expected "Tensor" [arg-type]
        img = F.convert_image_dtype(img, torch.float)
                                    ^~~
torchvision/prototype/transforms/_presets.py:61: error: Argument 1 to "_process_image" has incompatible type "Tensor"; expected "Image" [arg-type]
        left_image = _process_image(left_image)
                                    ^~~~~~~~~~
torchvision/prototype/transforms/_presets.py:62: error: Argument 1 to "_process_image" has incompatible type "Tensor"; expected "Image" [arg-type]
        right_image = _process_image(right_image)
                                     ^~~~~~~~~~~
Found 54 errors in 8 files (checked 235 source files)
Traceback (most recent call last):
  File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 100, in <module>
    main()
  File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 96, in main
    run_cmd_or_die(f"docker exec -t {container_name} /exec")
  File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 38, in run_cmd_or_die
    raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
RuntimeError: Command docker exec -t 15f9ff9969cc0b84a72d38706de8fe5d73d407690d6a66b0420cb74dc17d8947 /exec failed with exit code 1
Error: Process completed with exit code 1.
```
- `_functional_pil.py`
- `video_reader.py`
- `prototype.transforms.*`