
[proto] Fixed RandAug and all AA consistency tests #6519

Merged

merged 5 commits into pytorch:main on Aug 31, 2022

Conversation

vfdev-5
Collaborator

@vfdev-5 vfdev-5 commented Aug 30, 2022

In a follow-up PR, we can port consistency tests into Philip's framework

Collaborator

@pmeier pmeier left a comment

The testing logic is quite complicated and feels very brittle, given the number of patches and RNG alignments. I guess it is fine for now, given that these are only consistency checks and we are going to remove them when we roll out the API.

Still, do you think it would be easier to revert some of the changes in the prototype AA transforms to have the same random parameter generation? In #6522 I found that for all other transformations, just setting the random seed is sufficient for them to be tested in the automatic framework I designed.
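For context, the seeding approach only works when both APIs consume the RNG in the same order. A toy sketch of that pattern (the two `*_get_params` functions are hypothetical stand-ins for the stable and prototype parameter sampling, not real torchvision code):

```python
import torch

def legacy_get_params():
    # stand-in for the stable transform's random parameter sampling
    return int(torch.randint(0, 10, (1,)))

def proto_get_params():
    # stand-in for the prototype's sampling; seeding only makes the two
    # comparable if both draw from the RNG in the same order
    return int(torch.randint(0, 10, (1,)))

torch.manual_seed(0)
a = legacy_get_params()
torch.manual_seed(0)
b = proto_get_params()
assert a == b  # holds because both consume the RNG identically
```

If the prototype draws in a different order or draws extra values, the same seed produces different parameters, which is what forces the patches and RNG alignments in these tests.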

@@ -378,8 +378,12 @@ def affine_segmentation_mask(


def _convert_fill_arg(fill: Optional[Union[int, float, Sequence[int], Sequence[float]]]) -> Optional[List[float]]:
# Fill = 0 is not equivalent to None, https://github.com/pytorch/vision/issues/6517
Collaborator

Don't we need to change this in the stable transforms as well? I would feel a lot more confident in this patch if the stable CI confirms this.

Collaborator Author

Stable uses fill=None everywhere I spotted the difference, which is why there is no change there.
However, we may want to fix the current inconsistency in stable, where fill=None behaves differently from fill=0 (#6517).
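For illustration, the kind of normalization discussed here can be sketched as follows (a simplified stand-in for `_convert_fill_arg`, not the actual prototype helper): the key point from #6517 is that `None` must be preserved rather than coerced to `0`.

```python
from typing import List, Optional, Sequence, Union

def convert_fill_arg(
    fill: Optional[Union[int, float, Sequence[int], Sequence[float]]]
) -> Optional[List[float]]:
    # fill=None must stay None: it is NOT equivalent to fill=0
    # (see pytorch/vision#6517). Scalars and sequences are
    # normalized to a list of floats.
    if fill is None:
        return None
    if isinstance(fill, (int, float)):
        return [float(fill)]
    return [float(v) for v in fill]

assert convert_fill_arg(None) is None          # None preserved, not coerced to 0
assert convert_fill_arg(0) == [0.0]
assert convert_fill_arg((1, 2, 3)) == [1.0, 2.0, 3.0]
```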

@@ -549,8 +549,8 @@ def _apply_grid_transform(img: Tensor, grid: Tensor, mode: str, fill: Optional[L

# Append a dummy mask for customized fill colors, should be faster than grid_sample() twice
if fill is not None:
-    dummy = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
-    img = torch.cat((img, dummy), dim=1)
+    mask = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
Collaborator

Thanks, dummy is indeed not a good name.
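The rename above touches the ones-channel trick used to support custom fill colors. As a standalone illustration (a simplified sketch under assumed shapes, not the actual torchvision code; `apply_grid_transform_with_fill` is a hypothetical name), the idea is to warp an extra channel of ones together with the image and use it afterwards as an out-of-bounds mask:

```python
import torch
import torch.nn.functional as F

def apply_grid_transform_with_fill(img, grid, fill):
    # Append a channel of ones; wherever grid_sample reads outside the
    # image (padding_mode="zeros") this channel comes back as 0, so a
    # single warp yields both the image and an out-of-bounds mask.
    if fill is not None:
        mask = torch.ones_like(img[:, :1])
        img = torch.cat((img, mask), dim=1)
    out = F.grid_sample(img, grid, mode="nearest", padding_mode="zeros", align_corners=False)
    if fill is not None:
        out, mask = out[:, :-1], out[:, -1:]
        # fill is assumed to be a per-channel list of floats here
        fill_img = torch.tensor(fill, dtype=out.dtype).view(1, -1, 1, 1)
        # keep warped pixels where the mask survived, paint fill elsewhere
        out = out * mask + fill_img * (1.0 - mask)
    return out

# Shift sampling one full normalized unit to the right: the two
# rightmost output columns read out of bounds and receive the fill.
img = torch.full((1, 3, 4, 4), 5.0)
theta = torch.tensor([[[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, [1, 3, 4, 4], align_corners=False)
out = apply_grid_transform_with_fill(img, grid, [9.0, 9.0, 9.0])
assert torch.all(out[..., :2] == 5.0) and torch.all(out[..., 2:] == 9.0)
```

Warping the mask alongside the image avoids calling `grid_sample` twice, which is the speed motivation noted in the code comment in the diff.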

@@ -340,19 +359,17 @@ def forward(self, *inputs: Any) -> Any:
sample = inputs if len(inputs) > 1 else inputs[0]

id, image = self._extract_image(sample)
-        num_channels, height, width = get_chw(image)
+        _, height, width = get_chw(image)
Collaborator

Good catch, but weird that flake8 didn't complain about an unused variable.

expected_output = pil_to_tensor(expected_output)
output = pil_to_tensor(output)

torch.testing.assert_close(expected_output, output)
Collaborator

Shouldn't we use assert_equal, i.e. assert_close(..., rtol=0, atol=0), for the consistency checks? Any deviation means there is a difference we don't want. Since we only compare tensor to tensor and PIL to PIL, the usual tolerances are probably unnecessary here.

In the framework I've added for consistency tests, I also added

assert_equal = functools.partial(_assert_equal, pair_types=[ImagePair], rtol=0, atol=0)

that handles PIL and tensor images simultaneously.
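For reference, a zero-tolerance partial like the one described is enough to turn `assert_close` into an exact comparison (a minimal sketch without the `ImagePair` handling from the framework):

```python
import functools
import torch

# With rtol=0 and atol=0, assert_close degenerates into an exact
# equality check: any deviation between outputs raises AssertionError,
# which is the behavior consistency tests want.
assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0)

a = torch.tensor([1.0, 2.0, 3.0])
assert_equal(a, a.clone())  # identical tensors pass

try:
    assert_equal(a, a + 1e-6)  # even a tiny deviation fails
except AssertionError:
    pass
else:
    raise RuntimeError("expected exact comparison to fail")
```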

@vfdev-5
Collaborator Author

vfdev-5 commented Aug 30, 2022

> Still, do you think it would be easier to revert some of the changes in the prototype AA transforms to have the same random parameter generation?

Yes, maybe. On the other hand, I would not say the prototype logic is badly coded; I like how it was done.

> I guess it is fine for now given that these are only consistency checks and we are going to remove them when we roll-out the API.

I agree, so we can keep those weird tests for now, since we'll remove them later.

@vfdev-5 vfdev-5 merged commit 4cb83c2 into pytorch:main Aug 31, 2022
@vfdev-5 vfdev-5 deleted the proto-fix-aa-and-tests branch August 31, 2022 08:29
facebook-github-bot pushed a commit that referenced this pull request Sep 9, 2022
Summary:
* [proto] Fixed RandAug implementation

* Fixed randomness in tests for trivial aug

* Fixed all AA tests

Reviewed By: YosuaMichael

Differential Revision: D39381947

fbshipit-source-id: 9c0cdd8b251b20d3d01227645caf62ab68a4abd9