refactor transforms v2 tests #7562
Conversation
Thanks for working on this, Philip. I made some minor comments, but the overall logic / layout looks good to me so far.
check_scripted_smoke=True,
check_dispatch_simple_tensor=True,
check_dispatch_pil=True,
check_dispatch_datapoint=True,
Do we already know which of the transforms are going to set one of these to False?
I'm not sure about the rest, but it sounds like at least check_dispatch_simple_tensor and check_dispatch_datapoint should always be True?
In many places we have parameters that we thought could be useful eventually, but that ended up never being used and now clutter the API. It's best to be conservative and only expose what we need when we need it (unless we're 100% sure we'll need it very soon).
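To make the discussion concrete, here is a minimal, self-contained sketch of what a check helper with these flags could look like. The flag names mirror the diff above, but the helper name and the check bodies are placeholders invented for illustration, not torchvision code:

```python
# Hypothetical sketch of a consolidated check helper; flag names mirror the
# diff under review, but the bodies are placeholders, not the real checks.
def check_transform(
    transform,
    input,
    *,
    check_scripted_smoke=True,
    check_dispatch_simple_tensor=True,
    check_dispatch_pil=True,
    check_dispatch_datapoint=True,
):
    """Run the selected checks and report which ones were executed."""
    executed = []
    if check_scripted_smoke:
        executed.append("scripted_smoke")  # placeholder: torch.jit.script smoke test
    if check_dispatch_simple_tensor:
        executed.append("dispatch_simple_tensor")  # placeholder: plain tensor dispatch
    if check_dispatch_pil:
        executed.append("dispatch_pil")  # placeholder: PIL image dispatch
    if check_dispatch_datapoint:
        executed.append("dispatch_datapoint")  # placeholder: datapoint dispatch
    return executed
```

With this shape, a test for an operation that cannot support datapoint dispatch would simply pass check_dispatch_datapoint=False, while the conservative option is to not expose a flag at all until some test actually needs it.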
Do we already know which of the transforms are going to set one of these to False?
Yes, there are a few. You can see them by looking at the configs in transforms_v2_dispatcher_infos.py that use the following skip marker.
vision/test/transforms_v2_dispatcher_infos.py
Lines 99 to 102 in fc83b28
skip_dispatch_datapoint = TestMark(
    ("TestDispatchers", "test_dispatch_datapoint"),
    pytest.mark.skip(reason="Dispatcher doesn't support arbitrary datapoint dispatch."),
)
I'm not sure about the rest but it sounds at least that check_dispatch_simple_tensor and check_dispatch_datapoint should always be True?
Indeed, the statement above only applies to datapoints. To the best of my knowledge, there is no dispatcher for which we can't automatically check simple tensor and PIL dispatch.
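For illustration, the mechanics of such a skip marker can be mimicked without torchvision's test infrastructure. TestMark below is a stand-in namedtuple that only mirrors the shape of the real helper, and marks_for is an invented lookup function:

```python
import collections

# Self-contained mimic of the TestMark pattern quoted above; the real TestMark
# lives in torchvision's test suite, this namedtuple only mirrors its shape.
TestMark = collections.namedtuple("TestMark", ["test_id", "mark"])

skip_dispatch_datapoint = TestMark(
    ("TestDispatchers", "test_dispatch_datapoint"),
    "skip: Dispatcher doesn't support arbitrary datapoint dispatch.",
)

def marks_for(test_class, test_name, test_marks):
    """Collect the marks registered for a given (test class, test name) pair."""
    return [tm.mark for tm in test_marks if tm.test_id == (test_class, test_name)]
```

In the real suite the mark is a pytest mark (e.g. pytest.mark.skip) that the test machinery applies when the matching test runs against that config.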
Thanks Philip, I made another round but it looks good. We're close.
Thanks a lot Philip. I made a few more comments below but they're not super critical. LGTM.
def _make_input(self, input_type, *, dtype=None, device="cpu", **kwargs):
    if input_type in {torch.Tensor, PIL.Image.Image, datapoints.Image}:
        input = make_image(size=self.INPUT_SIZE, dtype=dtype or torch.uint8, device=device, **kwargs)
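The factory pattern in _make_input can be sketched without torch or PIL installed. The classes below are plain-Python stand-ins for torch.Tensor, PIL.Image.Image, and datapoints.Image, and make_image is a dependency-free placeholder for the real helper:

```python
# Plain-Python stand-ins for torch.Tensor, PIL.Image.Image, datapoints.Image.
class Tensor: pass
class PILImage: pass
class DatapointImage: pass

INPUT_SIZE = (17, 11)

def make_image(size, *, dtype="uint8", device="cpu"):
    # Placeholder for the real make_image helper; returns a descriptor
    # instead of an actual image so the sketch stays dependency-free.
    return {"size": size, "dtype": dtype, "device": device}

def make_input(input_type, *, dtype=None, device="cpu"):
    """Dispatch on the requested input type and delegate to one shared maker."""
    if input_type in {Tensor, PILImage, DatapointImage}:
        return make_image(INPUT_SIZE, dtype=dtype or "uint8", device=device)
    raise TypeError(f"unsupported input type: {input_type!r}")
```

The design point is that every test class funnels input creation through one factory, so defaults like size, dtype, and device live in a single place.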
Just pointing back to #7562 (comment) to make sure it wasn't missed. But we can leave it out for now.
It wasn't missed, but we can deal with it along the way. Assuming that contributors are ok with just using these functions for now, I would postpone this in favor of velocity.
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
New "benchmark" for this PR at ff4c0ea
After d5358b9 we can compare the time impact of this PR.
Hey @pmeier! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Reviewed By: vmoens
Differential Revision: D47186580
fbshipit-source-id: b5703d86d716c6eb804076ebb969fa23e20287b4
Co-authored-by: Nicolas Hug <nh.nicolas.hug@gmail.com>
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
While very comprehensive, the transforms v2 (functional) tests are also fairly complicated. Thus, it is hard to ramp up new team members or OSS contributors. @NicolasHug and I discussed this offline and agreed that we need to tackle it.
The question that remains is: is the complexity that we currently have actually needed, or can we live with a test setup as simple as the one for transforms v1? Skimming through the v1 tests, we identified quite a few holes in what we have now. Thus, it seems that our current process of enforcing comprehensive tests through code review is not sufficient.
Still, we wondered if we can find a middle ground here. And this PR is a PoC for that.
Basically we want one test class per "operation", where operation here means the actual transform, the corresponding dispatcher, and all kernels. Each test should call one helper that comprehensively performs a number of checks.
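As a rough, self-contained illustration of that layout (the operation, helper, and class names below are invented for the sketch; the "kernel" is a toy horizontal flip on sequences, not a torchvision kernel):

```python
# Toy illustration of the proposed layout: one test class per operation, with
# every test funneling through one comprehensive check helper. Names invented.
def check_op(fn, input, expected):
    """Stand-in for the single comprehensive helper each test calls."""
    output = fn(input)
    assert output == expected, f"{output!r} != {expected!r}"
    return output

def hflip_kernel(seq):
    # Toy "kernel": reverse a sequence, standing in for a tensor op.
    return list(reversed(seq))

class TestHorizontalFlip:
    def test_kernel(self):
        check_op(hflip_kernel, [1, 2, 3], [3, 2, 1])

    def test_dispatcher(self):
        # In the real layout the dispatcher routes to a kernel by input type;
        # here we just feed a different sequence type through the same check.
        check_op(hflip_kernel, (1, 2), [2, 1])

    def test_transform(self):
        # The transform wraps the dispatcher; identical single entry point.
        check_op(hflip_kernel, [], [])
```

The point of the shape is discoverability: a contributor looking for anything about one operation finds the transform, dispatcher, and kernel tests in one class, all delegating to the same checker.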
cc @vfdev-5