Change default of antialias parameter from None to 'warn' #7160
Conversation
torchvision/transforms/functional.py
# TODO in v0.17: remove this helper and change default of antialias to True everywhere
def _check_antialias(antialias: Optional[Union[str, bool]], interpolation: InterpolationMode) -> Optional[bool]:
    if isinstance(antialias, str):  # it should be "warn", but we don't bother checking against that
        # TODO Should we only warn if input is a tensor, or for PIL images too?
Thoughts?
I would only warn for tensors. The value is, and will be, ignored for PIL, and we already warn when it is passed in the first place:
vision/torchvision/transforms/functional.py
Lines 474 to 476 in 135a0f9
if not isinstance(img, torch.Tensor):
    if antialias is not None and not antialias:
        warnings.warn("Anti-alias option is always applied for PIL Image input. Argument antialias is ignored.")
so the warning here is irrelevant for the user.
OK, we're only warning for Tensors now. One minor caveat is that since we can't warn in __init__() of Resize() anymore, the warning will only happen during forward(). I assume that's fine? Right now, I tend to prefer that to the alternative of warning for PIL users, who shouldn't have to be bothered about all this.
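For illustration, here is a minimal sketch of the kind of helper being discussed, under the assumption that the deprecation warning should only fire for tensor inputs and only for modes where antialiasing actually matters (the name, message, and exact conditions are assumptions, not the code merged in this PR):

import warnings
from typing import Optional, Union

import torch
from torchvision.transforms import InterpolationMode


# Hypothetical sketch, not the exact helper from this PR.
def _check_antialias_sketch(img, antialias: Optional[Union[str, bool]], interpolation: InterpolationMode) -> Optional[bool]:
    if isinstance(antialias, str):  # the "warn" sentinel
        if isinstance(img, torch.Tensor) and interpolation in (InterpolationMode.BILINEAR, InterpolationMode.BICUBIC):
            warnings.warn(
                "The default value of antialias will change from None to True in a future release. "
                "Pass antialias explicitly to silence this warning."
            )
        antialias = None  # keep the current behaviour until the default actually changes
    return antialias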
# We manually set it to False to avoid an error downstream in interpolate()
# This behaviour is documented: the parameter is irrelevant for modes
# that are not bilinear or bicubic. We used to raise an error here, but
# now we don't as True is the default.
We used to raise an error; now we silently disable antialias. Otherwise, we'd be breaking those use cases when we switch the default to True in 2 versions:
Resize(interpolation=NEAREST)(img)  # uh oh, default changed to True, which raises an error!
This is a slight change of behaviour (not a change of results though!) and it's not perfect, but I feel like the alternative is worse: we could introduce another value for antialias, e.g. "auto", which would mean "do what PIL does, i.e. apply antialias for bilinear and bicubic, and no antialias otherwise". But having that + True + False + None would bloat the API IMHO.
If it makes it any better, remember that PIL already silently switches antialiasing on or off depending on the interpolation mode (granted, they don't have an antialias parameter, so it's not a completely fair comparison).
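To make the "silently disable" behaviour concrete, here is a rough sketch (assumed names, not the literal diff): rather than erroring out when antialias ends up True with a mode like NEAREST, the flag is dropped before it reaches interpolate().

# Hypothetical sketch of silently disabling antialias for modes where it doesn't apply.
def _resolve_antialias_sketch(antialias: bool, interpolation: str) -> bool:
    if antialias and interpolation not in ("bilinear", "bicubic"):
        # interpolate() only supports antialias for bilinear/bicubic, so we turn it
        # off here instead of raising, which keeps e.g. NEAREST resizes working
        # once the default eventually flips to True.
        antialias = False
    return antialias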
models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT.transforms()(img)
models.video.R3D_18_Weights.DEFAULT.transforms()(img)
models.optical_flow.Raft_Small_Weights.DEFAULT.transforms()(img, img)
For these 3, AFAICT, Resize() is either not used in the evaluation presets, or the training was done on Tensors already (i.e. no interpolation was done anyway, in which case I hard-coded antialias=False and added comments in the code to explain why).
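For illustration, pinning the flag in a preset might look roughly like this (an assumed example with made-up sizes, not the actual preset code):

import torch
from torchvision.transforms import InterpolationMode
from torchvision.transforms import functional as F


# Hypothetical sketch of a weights-preset resize with the flag pinned.
def preset_resize_sketch(img: torch.Tensor) -> torch.Tensor:
    # These weights were trained on tensors, where no antialiasing was applied,
    # so the flag is hard-coded to keep the original results once the default changes.
    return F.resize(img, [256, 256], interpolation=InterpolationMode.BILINEAR, antialias=False)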
# TODO: remove this test in 0.17 when the default of antialias changes to True
def test_antialias_warning():
I'm mixing the functional and class tests here... Hope that's fine (I found it awkward to scatter them across so many files as done in the V1 tests).
Fine by me. Test is self-contained enough that there shouldn't be any confusion about it later.
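For context, a rough sketch of what such a combined functional/class warning test could look like (the tensor shape, the warning category, and the matched message are assumptions):

import pytest
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F


# Hypothetical sketch of a combined functional/class warning test.
def test_antialias_warning_sketch():
    tensor_img = torch.randint(0, 256, (3, 16, 16), dtype=torch.uint8)

    # Functional API: omitting antialias on a tensor should warn.
    with pytest.warns(UserWarning, match="antialias"):
        F.resize(tensor_img, size=[8, 8])

    # Class API: the warning is only emitted when the transform is applied (forward()).
    with pytest.warns(UserWarning, match="antialias"):
        T.Resize(size=[8, 8])(tensor_img)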
I just confirmed on the references that the behaviour looks fine. I'm getting warnings when relevant (only on tensors) and the accuracy boost is consistent with #6506 (I only checked …
if not isinstance(img, torch.Tensor):
    if antialias is not None and not antialias:
        warnings.warn("Anti-alias option is always applied for PIL Image input. Argument antialias is ignored.")
Now that we have four possible values, should we refactor these checks? They are quite hard to read TBH:
antialias | a = antialias is not None | b = not antialias | a and b
---|---|---|---
None | False | True | False
False | True | True | True
True | True | False | False
"warn" | True | False | False
Unless I'm missing something, that should be equivalent to antialias is False?
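For illustration, the simplification being suggested would look roughly like this (a sketch wrapped in a throwaway helper; not code from this PR):

import warnings

import torch


# Hypothetical refactor of the check quoted above: only an explicit antialias=False
# triggers the warning; None, True and "warn" all fall through silently.
def _warn_pil_antialias_sketch(img, antialias) -> None:
    if not isinstance(img, torch.Tensor) and antialias is False:
        warnings.warn("Anti-alias option is always applied for PIL Image input. Argument antialias is ignored.")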
Yes, is False is clearer, although I avoided doing this in this PR because it's not strictly needed for the original purpose, and the PR is already pretty long. I don't mind doing it if you think it's less confusing.
I don't care if we do it here or in a follow-up, but we should do it. I had to build the table above to see if the new case of "warn" does something unexpected, and that is not a good sign for an otherwise simple check.
@pmeier antialias="warn" should behave as antialias=None, so I would say there is no 4th line in the table (also, the values of the 4th line seem to be incorrect).
Well, the result is the same, but the way it is achieved is different. What do you think is wrong?
>>> antialias = "warn"
>>> antialias is not None
True
>>> not antialias
False
>>> antialias is not None and not antialias
False
resized_tensor = F.resize(tensor, size=size, interpolation=interpolation, max_size=max_size)
resized_pil_img = F.resize(pil_img, size=size, interpolation=interpolation, max_size=max_size)
resized_tensor = F.resize(tensor, size=size, interpolation=interpolation, max_size=max_size, antialias=True)
resized_pil_img = F.resize(pil_img, size=size, interpolation=interpolation, max_size=max_size, antialias=True)
Nit: as we're now doing antialias=True, we can also reduce the tol below:
_assert_approx_equal_tensor_to_pil(resized_tensor_f, resized_pil_img, tol=8.0)
OK, I'll merge now to unblock, and I'll open a PR to check the CI on a lower tol.
EDIT: #7233
OK to me, thanks @NicolasHug
I left a few nitpicking comments, but they can also be ignored.
Change default of antialias parameter from None to 'warn' (#7160)
Summary:
Reviewed By: vmoens
Differential Revision: D44416275
fbshipit-source-id: 916691a68545d0b487d9ac20b4a8f42ec42315b6
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Co-authored-by: vfdev <vfdev.5@gmail.com>
Before changing the default to True in 2 versions
Closes #7093
cc @vfdev-5