extract make_* functions out of make_*_loader #7717

Merged · 20 commits merged into pytorch:main on Jul 5, 2023

Conversation

@pmeier (Collaborator) commented Jul 3, 2023

Our current make_* functions are thin wrappers around the corresponding make_*_loader functions, which are only needed for the "deprecated" v2 tests. Thus, we invert the logic here: each make_* function becomes standalone, and the corresponding make_*_loader function depends on it instead.
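
For illustration, a minimal sketch of the inverted dependency, with heavily simplified signatures (the real helpers in test/common_utils.py take more parameters and return richer loader objects):

import torch
from torchvision import datapoints

def make_image(size=(16, 16), *, dtype=torch.uint8, device="cpu"):
    # Standalone now: directly builds the datapoint that the ported tests consume.
    data = torch.randint(0, 256, (3, *size), dtype=dtype, device=device)
    return datapoints.Image(data)

def make_image_loader(size=(16, 16), **kwargs):
    # Kept only for the loader-based ("deprecated") v2 tests; it now defers to
    # make_image rather than make_image wrapping it, i.e. the dependency is inverted.
    return lambda: make_image(size, **kwargs)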

@pytorch-bot (bot) commented Jul 3, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/7717

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures

As of commit 519f0fa:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@NicolasHug (Member) left a comment

Thanks Philip, took a quick glance as discussed offline

@pmeier mentioned this pull request Jul 3, 2023
@pmeier marked this pull request as ready for review July 3, 2023 19:35
@pmeier (Collaborator, Author) commented Jul 5, 2023

Regarding removing make_input: I've played around with it and it works mostly fine. The only thing that will be harder is the datapoint / simple tensor / PIL image handling. Currently, this is the logic we have inside make_input:

if input_type in {torch.Tensor, PIL.Image.Image, datapoints.Image}:
    input = make_image(size=spatial_size, dtype=dtype or torch.uint8, device=device, **kwargs)
    if input_type is torch.Tensor:
        input = input.as_subclass(torch.Tensor)
    elif input_type is PIL.Image.Image:
        input = F.to_image_pil(input)

Since we need this functionality somehow, one possibility is to add two new functions, make_image_tensor and make_image_pil, as thin wrappers around make_image, which then produces a datapoint in line with all the other "regular" make_* functions.
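
A rough sketch of what those wrappers could look like (hypothetical; it assumes a make_image that already returns a datapoints.Image and reuses the as_subclass / to_image_pil conversions from the existing make_input logic above):

import torch
from torchvision.transforms.v2 import functional as F

def make_image_tensor(*args, **kwargs):
    # Same pixels as make_image, but as a plain torch.Tensor instead of a datapoint.
    return make_image(*args, **kwargs).as_subclass(torch.Tensor)

def make_image_pil(*args, **kwargs):
    # Same pixels as make_image, but converted to a PIL.Image.Image.
    return F.to_image_pil(make_image(*args, **kwargs))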

Another option would be to add a return_type parameter to make_image and, inside our tests, just use functools.partial(make_image, return_type=...).
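
And a sketch of the return_type option (the parameter name and its values are placeholders for illustration, not something make_image currently has):

import functools
import torch
from torchvision import datapoints
from torchvision.transforms.v2 import functional as F

def make_image(size=(16, 16), *, dtype=torch.uint8, device="cpu", return_type="datapoint"):
    # Build the datapoint as usual, then convert on the way out if requested.
    image = datapoints.Image(torch.randint(0, 256, (3, *size), dtype=dtype, device=device))
    if return_type == "tensor":
        return image.as_subclass(torch.Tensor)
    if return_type == "pil":
        return F.to_image_pil(image)
    return image

# In the tests, the variants would then just be partials:
make_image_tensor = functools.partial(make_image, return_type="tensor")
make_image_pil = functools.partial(make_image, return_type="pil")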

Regardless of which option we choose, if we remove make_input, we are no longer parametrizing over input_type, but rather over something like

@pytest.mark.parametrize(
    ("kernel", "make_input"),
    [
        # TODO: add simple tensor / PIL image handling
        (F.resize_image_tensor, make_image),
        (F.resize_bounding_box, make_bounding_box),
        (F.resize_mask, make_segmentation_mask),
        (F.resize_video, make_video),
    ],
)

That should work fine in general, but I already found one instance in the tests ported so far that branches on the input_type:

def test_interpolation_int(self, interpolation, input_type):
    # `InterpolationMode.NEAREST_EXACT` has no proper corresponding integer equivalent. Internally, we map it to
    # `0` to be the same as `InterpolationMode.NEAREST` for PIL. However, for the tensor backend there is a
    # difference and thus we don't test it here.
    if issubclass(input_type, torch.Tensor) and interpolation is transforms.InterpolationMode.NEAREST_EXACT:
        return

If we go for the standalone functions, we can still do this, although it looks a little quirky:

if make_input is make_image_tensor and interpolation is transforms.InterpolationMode.NEAREST_EXACT:
    ...

However, if we use functools.partial, that is not possible, since the parametrized callables would be anonymous partial objects rather than named functions we can compare against.

Admittedly, the branching above is uncommon, and I don't expect significantly more of this pattern in the future. Meaning, we could just special-case it and parametrize over both the make_* function and the input_type. Still, it raises the question of whether make_input in fact has a reason to exist. Thoughts?

@NicolasHug (Member) commented

Thanks for investigating, Philip.

> Admittedly, the branching above is uncommon and I don't expect significantly more of this pattern in the future

If that's the case, then I guess we can safely get rid of it.

@pmeier (Collaborator, Author) commented Jul 5, 2023

Re size vs. spatial_size: we had a long offline conversation about it and came to the following conclusions:

  • size is a valid term for bounding boxes. It describes the size of the bounding box, i.e.

    • (box[2] - box[0], box[3] - box[1]) for XYXY
    • (W, H) for XYWH and CXCYWH

    It is unrelated to the tensor shape. That is different from all the other datapoints that we currently have, where the size is directly encoded in the shape.

  • We still need a term to describe:

    • a) the size of the image, video, mask (and of a bbox, as above)
    • b) the corresponding "image size" of a bounding box. For lack of a better term, we agreed to keep spatial_size for now.
      We need a) and b) to be different because they refer to different things. So far, spatial_size has been used for both a) and b), but we've agreed to use size for a) and only spatial_size for b) within the tests.
  • In general, the make_* functions don't need to know about spatial_size; only make_bounding_box does. We can add a size parameter to it and sample a spatial_size around it if none is given. This means all make_* functions will have a size parameter, and that is what we are going to use going forward (see the sketch after this list).

    • Note that we still need make_bounding_box to accept a spatial_size parameter, for cases where we want to generate a sample with a joint image and bbox. The image's size corresponds to the bbox's spatial_size (while the bbox's own size is randomly generated to be smaller than spatial_size).

      image = make_image(size=img_size)
      box = make_bounding_box(spatial_size=img_size)
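
To make the agreed-upon behavior concrete, here is a rough, hypothetical sketch of the resulting make_bounding_box contract (XYXY only and much simplified compared to the actual helper):

import torch
from torchvision import datapoints

def make_bounding_box(size=None, *, spatial_size=None, dtype=torch.int64):
    if size is None and spatial_size is None:
        size = (4, 4)  # arbitrary default box size (height, width)
    if spatial_size is None:
        # Only the box size is given: sample an image size large enough to contain it.
        spatial_size = tuple(s + int(torch.randint(1, 9, ())) for s in size)
    if size is None:
        # Only the image size is given: sample a box size smaller than it.
        size = tuple(int(torch.randint(1, s, ())) for s in spatial_size)
    # Place the box at a random position inside the image.
    top = int(torch.randint(0, spatial_size[0] - size[0] + 1, ()))
    left = int(torch.randint(0, spatial_size[1] - size[1] + 1, ()))
    xyxy = torch.tensor([left, top, left + size[1], top + size[0]], dtype=dtype)
    return datapoints.BoundingBox(xyxy, format=datapoints.BoundingBoxFormat.XYXY, spatial_size=spatial_size)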

(EDIT: A few edits from @NicolasHug as discussed offline)

Comment on lines +264 to +266
closeness_kwargs={
    (("TestKernels", "test_against_reference"), torch.int64, "cpu"): dict(atol=1, rtol=0),
},
@pmeier (Collaborator, Author) commented:

I guess we were lucky before that we never hit that. Converting to or from CXCYWH with integer dtypes can lead to rounding errors. Now that we have refactored make_bounding_box, we are seeing two of those errors in this test.
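
For instance, here is a hand-worked round trip in plain integer arithmetic (a simplified illustration of the effect, not the actual conversion kernel):

# XYXY -> CXCYWH -> XYXY with integer math: a box with odd width/height has a
# fractional center that an integer dtype cannot represent, so the reconstructed
# corners shift by one pixel -- hence the atol=1 above.
x1, y1, x2, y2 = 2, 2, 7, 9              # width 5, height 7
w, h = x2 - x1, y2 - y1
cx, cy = (x1 + x2) // 2, (y1 + y2) // 2  # 4 and 5 instead of 4.5 and 5.5
print(cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)  # 2 2 6 8, not 2 2 7 9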

@pmeier (Collaborator, Author) commented Jul 5, 2023

The diff in this PR grew pretty large, so here is a summary of the changes:

  • Factor the make_* functions out of the corresponding make_*_loader functions
    • Since we still have v2 tests that rely on the loader architecture, the loader functions need to stay for now. However, once we are done porting, we can remove them along with all supporting classes and functions.
  • Remove the "random" option for the size of the generated inputs. This accounts for most of the changes in test/transforms_v2_kernel_infos.py
  • Add a size parameter to make_bounding_box
  • Remove make_input in favor of parametrizing over the low-level functions

@pmeier requested a review from NicolasHug July 5, 2023 12:42
@NicolasHug (Member) left a comment

Thanks Philip

@pmeier (Collaborator, Author) commented Jul 5, 2023

Test failures are real

@pmeier (Collaborator, Author) commented Jul 5, 2023

The test failures likely stem from a bad reference function. I can reproduce the same error on main when using the bounding box created by this PR. However, the failing test is actually a correctness test that should have been removed in #7713 but was forgotten. I acted on that in 519f0fa, so there is no need to fix any test.

@pmeier merged commit 23b0938 into pytorch:main Jul 5, 2023
@pmeier deleted the port/make branch July 5, 2023 20:20
@pmeier mentioned this pull request Jul 11, 2023
@pmeier mentioned this pull request Jul 27, 2023
facebook-github-bot pushed a commit that referenced this pull request Aug 25, 2023
Reviewed By: matteobettini

Differential Revision: D48642276

fbshipit-source-id: b60d934f09cb1fb4678101bdba2eef4b4adefa89