
Adding AugMix implementation #5411

Merged · 12 commits into pytorch:main · Feb 18, 2022

Conversation

@datumbox (Contributor) commented on Feb 11, 2022:

Adding the AugMix data augmentation method, inspired by the work in the official repo.

@facebook-github-bot commented on Feb 11, 2022:

💊 CI failures summary and remediations

As of commit ecc598e (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@datumbox mentioned this pull request on Feb 11, 2022

@datumbox (Contributor, Author) left a comment:


@normster I would love your input on the implementation, since you are one of the key authors. Feedback on any part of the PR is welcome, but I've specifically flagged a couple of places in the implementation below.

Thanks in advance!

Review threads on torchvision/transforms/autoaugment.py (all resolved)
@hendrycks (Contributor) commented:

In later data augmentation papers such as PixMix, we used all of these PIL augmentations, so I think the PyTorch vision AugMix implementation could use them too:
```
[
    autocontrast, equalize, posterize, rotate, solarize, shear_x, shear_y,
    translate_x, translate_y, color, contrast, brightness, sharpness
]
```
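As a sketch of the resulting augmentation space (the names are illustrative; torchvision's actual op table lives in autoaugment.py), the base AugMix set plus the four extra color ops gated behind `all_ops` could be represented as:

```python
# Base AugMix augmentation space from the paper: geometric ops plus the
# color ops that don't overlap with the ImageNet-C corruptions.
BASE_OPS = [
    "autocontrast", "equalize", "posterize", "rotate", "solarize",
    "shear_x", "shear_y", "translate_x", "translate_y",
]

# The four extra ops from the full PIL list above; in torchvision these
# are gated behind the `all_ops` constructor flag.
EXTRA_OPS = ["color", "contrast", "brightness", "sharpness"]

def augmentation_space(all_ops: bool) -> list:
    """Return the op names available for a given `all_ops` setting."""
    return BASE_OPS + (EXTRA_OPS if all_ops else [])
```

With `all_ops=True` this yields the full 13-op list quoted above; with `all_ops=False` only the 9 base ops are used.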

@datumbox (Contributor, Author) replied:

@hendrycks Thanks for the input!

The current implementation already supports this: the extra 4 transforms are optional and can be enabled by passing all_ops=True to the constructor. See #5411 (comment)

Let me know if you have any other feedback on the implementation. The main changes are on the autoaugment.py file of this PR.

@datumbox changed the title from "[WIP] Adding AugMix implementation" to "Adding AugMix implementation" on Feb 16, 2022
@datumbox (Contributor, Author) commented:

@normster @hendrycks We intend to merge soon. If you have any thoughts on the comments above let us know, thanks!

@normster left a comment:


This looks great! I had a small question about per-image vs. per-batch processing, which I noted inline, but what you have should work fine in practice.

@datumbox (Contributor, Author) replied:

@normster Thanks a lot for the feedback. Regarding the sampling of weights, I will benchmark your proposal. Out of curiosity, did you run any experiments on sampling per image vs. per batch?

@datumbox requested a review from @vfdev-5 on February 18, 2022 at 13:18
@datumbox (Contributor, Author) commented:

I spoke offline with Norman and he mentioned they didn't do experiments with using the same weight for the entire batch. So to align with the official implementation, I just updated the code to sample one weight per image in the batch.
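The per-image sampling described above can be sketched as follows (shapes and names are illustrative, not torchvision's internal code): each image in the batch draws its own Dirichlet mixing weights for the augmented chains and its own Beta weight for mixing with the original image.

```python
import torch

def sample_augmix_weights(batch_size: int, mixture_width: int = 3, alpha: float = 1.0):
    """Sample AugMix mixing weights per image (not per batch).

    Returns:
        w: (batch_size, mixture_width) Dirichlet weights for combining
           the augmented chains; each row sums to 1.
        m: (batch_size, 1) Beta weight for mixing the combined
           augmented image back with the original image.
    """
    dirichlet = torch.distributions.Dirichlet(torch.full((mixture_width,), alpha))
    beta = torch.distributions.Beta(alpha, alpha)
    w = dirichlet.sample((batch_size,))  # one weight vector per image
    m = beta.sample((batch_size, 1))     # one skip weight per image
    return w, m
```

Sampling once per batch would instead draw a single `w` and `m` and broadcast them across all images, which is what the updated code avoids.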

Comment on torchvision/transforms/autoaugment.py, lines +545 to +553:

```python
fill = self.fill
if isinstance(orig_img, Tensor):
    img = orig_img
    if isinstance(fill, (int, float)):
        fill = [float(fill)] * F.get_image_num_channels(img)
    elif fill is not None:
        fill = [float(f) for f in fill]
else:
    img = self._pil_to_tensor(orig_img)
```

Collaborator:

Later we may want to refactor this part of the code, since it could be applicable to all the other augmentation strategies...

Contributor (Author) replied:

Certainly. We would also need to consider how videos can be handled here.

@vfdev-5 (Collaborator) left a comment:

LGTM, thanks @datumbox !

@datumbox (Contributor, Author) commented:

Though the official repo uses a default severity of 1, I've decided to change our default to 3. This aligns the transform's "intensity" with other methods such as RandAugment (which uses magnitude 9 out of 31). This value also seems to be favoured by other implementations, so it's worth changing. Users can override it by simply passing their preferred value.

@datumbox merged commit 48a61df into pytorch:main on Feb 18, 2022
@datumbox deleted the transforms/augmix branch on February 18, 2022 at 16:24
facebook-github-bot pushed a commit that referenced this pull request Feb 25, 2022
Summary:
* Adding basic augmix implementation.

* Finish the implementation.

* Add tests and documentation.

* Fix tests.

* Simplify code.

* Speed optimizations.

* Per image weights instead of per batch.

* Fix tests.

* Update torchvision/transforms/autoaugment.py

* Changing the default severity value to get by default the same strength as RandAugment.

Reviewed By: jdsgomes

Differential Revision: D34475319

fbshipit-source-id: 4637ad23deace03cf1f96b5c19a310c360f179d5

Co-authored-by: vfdev <vfdev.5@gmail.com>
5 participants