TODOs before 0.15 release #7217
vfdev-5 added a commit to vfdev-5/vision that referenced this issue on Feb 13, 2023:

Linking #7056 for visibility
vfdev-5 added a commit to vfdev-5/vision that referenced this issue on May 4, 2023:

Description:
- Now that pytorch/pytorch#90771 is merged, let `Resize()` rely on `interpolate()`'s native `uint8` handling instead of converting to and from `float`.
- `uint8` input is not cast to `float32` for nearest mode, nor for bilinear mode when AVX2 is available.

Context: pytorch#7217

Benchmarks:
```
[----------- Resize cpu torch.uint8 InterpolationMode.NEAREST -----------]
                          |  resize v2  |  resize stable  |  resize nightly
1 threads: ---------------------------------------------------------------
      (3, 400, 400)       |     457     |       461       |       480
      (16, 3, 400, 400)   |    6870     |      6850       |     10100

Times are in microseconds (us).

[---------- Resize cpu torch.uint8 InterpolationMode.BILINEAR -----------]
                          |  resize v2  |  resize stable  |  resize nightly
1 threads: ---------------------------------------------------------------
      (3, 400, 400)       |     326     |       329       |       844
      (16, 3, 400, 400)   |    4380     |      4390       |     14800

Times are in microseconds (us).
```
[Source](https://gist.github.com/vfdev-5/a2e30ed50b5996807c9b09d5d33d8bc2)
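For context, here is a minimal sketch contrasting the native uint8 path with the old float round-trip. It is not the gist's benchmark code; it assumes a PyTorch build that already contains the uint8 bilinear kernels from pytorch/pytorch#90771, and the shape mirrors the single-image case above:

```python
import torch
import torch.nn.functional as F
from torch.utils import benchmark

# uint8 CHW image matching the (3, 400, 400) case above; interpolate() expects
# a batch dimension, hence the leading 1.
img = torch.randint(0, 256, (1, 3, 400, 400), dtype=torch.uint8)

stmts = {
    # Native uint8 path: interpolate() consumes the uint8 tensor directly
    # (requires the kernels from pytorch/pytorch#90771).
    "native uint8": "F.interpolate(img, size=(224, 224), mode='bilinear', antialias=True)",
    # Old behaviour: round-trip through float32 and back.
    "float round-trip": (
        "F.interpolate(img.float(), size=(224, 224), mode='bilinear', antialias=True)"
        ".round().clamp(0, 255).to(torch.uint8)"
    ),
}

for label, stmt in stmts.items():
    timer = benchmark.Timer(stmt=stmt, globals={"F": F, "img": img, "torch": torch})
    print(label, timer.timeit(200))
```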
vfdev-5 added a commit to vfdev-5/vision that referenced this issue on May 9, 2023, with the same description and benchmarks as above.
This is a meta-issue to keep track of all the things we need to do before the branch cut / release. Core branch cut is planned for Monday 13th.

This supersedes #7092 and includes stuff unrelated to transforms V2. @pmeier @vfdev-5, please feel free to directly edit this issue with updates, and add anything I may have missed.
Before branch cut - Friday 17th
Transforms V2 Stuff
- `fill` parameter compatibility with torchscript: Fix some annotations in transforms v2 for JIT v1 compatibility #7252 @pmeier
- Make `Image(some_pil_image)` work (description: Compatibility layer between stable datasets and prototype transforms #6663 (comment)) @vfdev-5 @pmeier: Image and Mask can accept PIL images #7231 (see the sketch after this list)
- `convert_dtype`: https://github.com/pytorch/vision/pull/7092/files#diff-0c3dc07ddb80b9a83bf674efe1dcaee6aaabab59210d4916d2e35536c498710aR441
- `torchvision.transforms.v2` (or whatever else if we change our mind): Rollout planning for transforms v2 #7097 @pmeier @vfdev-5 @NicolasHug, Promote prototype transforms to beta status #7261
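For the `Image(some_pil_image)` item above, a minimal sketch of the intended usage, assuming the `torchvision.datapoints` namespace that ships with the 0.15 prototype promotion (names may still change before release):

```python
import PIL.Image
from torchvision import datapoints

# A small synthetic PIL image so the snippet is self-contained.
pil_img = PIL.Image.new("RGB", (64, 48), color=(255, 0, 0))

# With "Image and Mask can accept PIL images" (#7231), wrapping a PIL image is
# expected to convert it to a CHW uint8 tensor subclass that the v2 transforms
# can dispatch on.
img = datapoints.Image(pil_img)

print(type(img))             # a datapoints.Image (torch.Tensor subclass)
print(img.shape, img.dtype)  # expected: torch.Size([3, 48, 64]) torch.uint8
```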
Other stuff

If time allows
- `None` as option to `antialias`
- Let `Resize()` rely on `interpolate()`'s native `uint8` handling instead of converting to and from `float`
- Decide whether `format` and `spatial_size` are adequate meta-data names for bboxes: https://github.com/pytorch/vision/pull/7092/files#r1071959354 (see the sketch after this list)
- Decide whether to assume that a single `BoundingBox` instance is present per sample. Some transforms enforce that, some don't. It could simplify some checks - sort of related: https://github.com/pytorch/vision/pull/7092/files#r1071600864
- Make `datapoints.FillType` and the others private @pmeier: make type alias private #7266
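As a reference point for the `format` / `spatial_size` naming question above, a minimal sketch of how that metadata currently appears on a bbox datapoint, assuming the prototype `torchvision.datapoints` API as of 0.15 (these are exactly the names the item proposes to reconsider):

```python
import torch
from torchvision import datapoints

# One XYXY box on a 480x640 image, carrying the two metadata fields in question.
box = datapoints.BoundingBox(
    torch.tensor([[10.0, 20.0, 110.0, 220.0]]),
    format=datapoints.BoundingBoxFormat.XYXY,
    spatial_size=(480, 640),  # (height, width) of the underlying image
)

print(box.format, box.spatial_size)
```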
After branch cut, before release

Just after release