[proto] Fixed RandAug and all AA consistency tests #6519
Conversation
The testing logic is quite complicated and feels very brittle given the number of patches and RNG alignments. I guess it is fine for now, given that these are only consistency checks and we are going to remove them when we roll out the API.
Still, do you think it would be easier to revert some of the changes in the prototype AA transforms to have the same random parameter generation? In #6522 I found that for all other transformations, just setting the random seed is sufficient for them to be tested in the automatic framework I designed.
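To make the seeding idea concrete, here is a minimal sketch of such a consistency check. It is only an illustration: the prototype namespace and transform class are assumptions from the prototype era, and the check only works when both implementations draw their random parameters in the same order, which is exactly what this PR had to fix for the AA transforms.

# Sketch of a seed-alignment consistency check. Assumes both the stable and
# the prototype AutoAugment draw their random parameters in the same order;
# the prototype namespace below may have moved since.
import torch
from torchvision import transforms as stable_T
from torchvision.prototype import transforms as proto_T


def check_autoaugment_consistency(seed: int = 0) -> None:
    image = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)

    torch.manual_seed(seed)
    expected = stable_T.AutoAugment()(image)

    torch.manual_seed(seed)
    actual = proto_T.AutoAugment()(image)

    # Exact comparison: any deviation means the two code paths diverged.
    torch.testing.assert_close(actual, expected, rtol=0, atol=0)


check_autoaugment_consistency()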
@@ -378,8 +378,12 @@ def affine_segmentation_mask(

def _convert_fill_arg(fill: Optional[Union[int, float, Sequence[int], Sequence[float]]]) -> Optional[List[float]]:
    # Fill = 0 is not equivalent to None, https://github.com/pytorch/vision/issues/6517
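For context, a plausible body for this helper, written as a sketch rather than a quote of the actual implementation. The property the comment is defending is that None is passed through untouched instead of being normalized to 0.

from typing import List, Optional, Sequence, Union


def _convert_fill_arg(
    fill: Optional[Union[int, float, Sequence[int], Sequence[float]]]
) -> Optional[List[float]]:
    # Fill = 0 is not equivalent to None (see #6517), so None must be
    # passed through untouched instead of being normalized to 0.
    if fill is None:
        return None
    if isinstance(fill, (int, float)):
        return [float(fill)]
    return [float(v) for v in fill]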
Don't we need to change this in the stable transforms as well? I would feel a lot more confident in this patch if the stable CI confirms this.
Stable is using fill=None everywhere I could spot a difference, so that's why there is no change there. However, we may want to fix the current inconsistency in stable between fill=None and fill=0 (#6517).
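If we want to confirm the behavioural difference locally, a quick probe on the stable functional API looks like the sketch below. It only reports whether the two fill values produce different outputs and does not presume the answer.

# Quick check on the stable functional API: does fill=None produce the same
# result as fill=0? (This is the inconsistency tracked in #6517.)
import torch
from torchvision.transforms import functional as F

img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)

out_none = F.rotate(img, angle=30.0, fill=None)
out_zero = F.rotate(img, angle=30.0, fill=0)

print("identical:", bool(torch.equal(out_none, out_zero)))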
@@ -549,8 +549,8 @@ def _apply_grid_transform(img: Tensor, grid: Tensor, mode: str, fill: Optional[L

     # Append a dummy mask for customized fill colors, should be faster than grid_sample() twice
     if fill is not None:
-        dummy = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
-        img = torch.cat((img, dummy), dim=1)
+        mask = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
Thanks, dummy is indeed not a good name.
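The rest of the function is not shown in the diff, but the usual way such a mask channel is consumed looks roughly like the following sketch (an assumption about the surrounding code, not a quote of it): warp the image and the mask in a single grid_sample() call, then blend the fill color into the region the warp did not cover.

from typing import List, Optional

import torch
from torch import Tensor
from torch.nn.functional import grid_sample


def _apply_grid_transform_sketch(img: Tensor, grid: Tensor, mode: str, fill: Optional[List[float]]) -> Tensor:
    # Append a mask channel so one grid_sample() call also tells us which
    # output pixels fall outside the source image.
    if fill is not None:
        mask = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
        img = torch.cat((img, mask), dim=1)

    img = grid_sample(img, grid, mode=mode, padding_mode="zeros", align_corners=False)

    if fill is not None:
        # Split the warped mask back off and blend the fill color into the
        # uncovered region.
        mask = img[:, -1:, :, :]
        img = img[:, :-1, :, :]
        fill_img = torch.tensor(fill, dtype=img.dtype, device=img.device).view(1, -1, 1, 1).expand_as(img)
        if mode == "nearest":
            bool_mask = mask.expand_as(img) < 0.5
            img[bool_mask] = fill_img[bool_mask]
        else:
            img = img * mask + (1.0 - mask) * fill_img
    return img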
@@ -340,19 +359,17 @@ def forward(self, *inputs: Any) -> Any:
         sample = inputs if len(inputs) > 1 else inputs[0]

         id, image = self._extract_image(sample)
-        num_channels, height, width = get_chw(image)
+        _, height, width = get_chw(image)
Good catch, but weird that flake8 didn't complain about an unused variable.
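On the flake8 point: by default pyflakes only raises F841 for a plain unused assignment and deliberately skips names bound by tuple unpacking, which would explain the silence here. This is a hedged explanation of the default behaviour, not something verified against this repo's lint config.

def get_chw():
    # Stand-in for the real helper; returns (channels, height, width).
    return 3, 32, 32


def example():
    # flake8/pyflakes does not emit F841 ("local variable is assigned to
    # but never used") for names created by tuple unpacking, so the unused
    # num_channels goes unreported.
    num_channels, height, width = get_chw()
    return height * width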
expected_output = pil_to_tensor(expected_output)
output = pil_to_tensor(output)

torch.testing.assert_close(expected_output, output)
Shouldn't we use assert_equal, i.e. assert_close(..., rtol=0, atol=0), for the consistency checks? Any deviation means there is a difference that we don't want. Since we only compare tensor to tensor and PIL to PIL, the usual tolerances are probably not needed here.
In the framework I've added for the consistency tests, I also added

assert_equal = functools.partial(_assert_equal, pair_types=[ImagePair], rtol=0, atol=0)

that handles PIL and tensor images simultaneously.
Yes, maybe. On the other hand, I would not say that the prototype logic is badly coded; I like how it was done.
I agree with that, so we can keep those weird tests as we'll remove them later.
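For reference, a simplified stand-alone version of the assert_equal helper quoted above (without the ImagePair handling, so just a sketch of the idea) can be built directly on torch.testing:

import functools

import torch

# rtol=0 / atol=0 turns assert_close into an exact-equality check, which is
# what a consistency test between stable and prototype outputs wants.
assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0)

a = torch.tensor([1.0, 2.0, 3.0])
assert_equal(a.clone(), a)       # passes: values are identical
# assert_equal(a + 1e-6, a)      # would raise AssertionError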
Summary:
* [proto] Fixed RandAug implementation
* Fixed randomness in tests for trivial aug
* Fixed all AA tests

Reviewed By: YosuaMichael
Differential Revision: D39381947
fbshipit-source-id: 9c0cdd8b251b20d3d01227645caf62ab68a4abd9
In a follow-up PR, we can port the consistency tests into Philip's framework.