Adds 'clip' alias for clamp #42770
Conversation
💊 CI failures summary (Dr. CI, as of commit 3747f18): Looks good so far, there are no failures yet. 1 failure confirmed as flaky and can be ignored.
min_val = -1
max_val = 1
m1[1] = min_val
m1[2] = max_val
what are you trying to achieve here? Even if you are extremely lucky and your input tensor is bound by [min_val, max_val], these assignments are not going to change anything, and output will be equal to input.
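For illustration, a minimal sketch of an input that is guaranteed to exercise both clamp boundaries (the values and names below are hypothetical, not the PR's code):

import torch

min_val, max_val = -1, 1
m1 = torch.randn(10, 10)
m1[0, 0] = -5.0   # plant a value below min_val
m1[0, 1] = 5.0    # plant a value above max_val
res = torch.clamp(m1, min_val, max_val)
assert res.min().item() >= min_val
assert res.max().item() <= max_val
assert not torch.equal(res, m1)   # clamping really changed something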
res1 = m1.clone()
inplace_op(res1, min_val, max_val)
res2 = m1.clone()
for i in iter_indices(res2):
maybe compare_with_numpy instead of this loop?
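As a rough sketch of that suggestion (reusing the test method's m1, min_val, max_val and inplace_op; the suite's compare_with_numpy helper may differ in signature, so plain numpy is used here):

import numpy as np

res1 = m1.clone()
inplace_op(res1, min_val, max_val)   # e.g. Tensor.clamp_ / Tensor.clip_
# assumes m1 is a CPU tensor so .numpy() is valid
expected = torch.from_numpy(np.clip(m1.numpy(), min_val, max_val))
self.assertEqual(res1, expected)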
res1 = test_tens.clone()
inplace_op(res1, min_val, max_val)
res2 = test_tens.clone()
for i in iter_indices(res2):
this loop is not needed, just compare with the expected [nan] here
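In other words, something along these lines (a sketch, assuming test_tens is the single-nan tensor used by this test):

res1 = test_tens.clone()
inplace_op(res1, min_val, max_val)
# clamp propagates nan, so the result should still be all nan
self.assertTrue(torch.isnan(res1).all())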
res2[i] = max(min(res2[i], max_val), min_val)
self.assertEqual(torch.isnan(res1), torch.isnan(res2))

out = test_tens.clone()
Given that you expect out to be equal to test_tens, this is not the best way to initialize out.
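One way to avoid that, sketched under the same assumptions (op stands for torch.clamp / torch.clip as in the test), is to initialize out with values that differ from the expected result, so a missing write is caught:

out = torch.zeros_like(test_tens)        # deliberately not equal to test_tens
op(test_tens, min_val, max_val, out=out)
self.assertTrue(torch.isnan(out).all())  # expected result for a nan input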
res1 = op(test_tens, min=min_val)
res2 = test_tens.clone()
for i in iter_indices(res2):
loop is not needed
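For the min-only case, continuing the numpy-based sketch above (and assuming the suite's assertEqual treats NaNs as equal, as torch's TestCase does by default):

res1 = op(test_tens, min=min_val)
# np.clip with a_max=None applies only the lower bound
expected = torch.from_numpy(np.clip(test_tens.numpy(), min_val, None))
self.assertEqual(res1, expected)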
error_msg = 'At least one of \'min\' or \'max\' must not be None'
with self.assertRaisesRegex(RuntimeError, error_msg):
    method_op(m1)
I don't see method_op tested anywhere other than here where it raises an error
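A minimal way to also exercise the method form against the functional form (a sketch; it assumes method_op is the Tensor method variant, e.g. Tensor.clip, and accepts the same min/max arguments):

expected = op(m1, min_val, max_val)                       # functional form, e.g. torch.clip
self.assertEqual(method_op(m1, min_val, max_val), expected)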
with self.assertRaisesRegex(RuntimeError, error_msg):
    method_op(m1)
with self.assertRaisesRegex(RuntimeError, error_msg):
    inplace_op(m1)
Is nan propagation in clamp tested anywhere else? The vectorized and non-vectorized paths could propagate nan differently, so testing both 1-element and 32-element tensors is needed to make sure everything is ok.
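A sketch of such a check (sizes chosen so both the scalar fallback and the vectorized kernel are hit; not the PR's code):

for size in (1, 32):
    x = torch.full((size,), float('nan'))
    self.assertTrue(torch.isnan(torch.clamp(x, -1.0, 1.0)).all())
    self.assertTrue(torch.isnan(torch.clip(x, -1.0, 1.0)).all())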
@@ -1497,6 +1496,12 @@ def merge_dicts(*dicts):
tensor([ 0.5000, -0.4702, -0.4599, 0.5000])
""".format(**common_args))

add_docstr(torch.clip, r"""
clip(input, min, max, *, out=None) -> Tensor
Why is there "*" here but not in the clamp declaration?
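For reference, the "*" in a documented signature marks the arguments after it as keyword-only, e.g. (hypothetical function, only to illustrate the syntax):

def clip_like(input, min=None, max=None, *, out=None):
    # 'out' must be passed by keyword: clip_like(t, 0, 1, out=buf)
    ...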
@@ -276,6 +276,8 @@ def method_tests():
('clamp', (), (None, 0.5), 'min_scalar', (True,)),
('clamp', (), (0.5, None), 'max_scalar', (True,)),
('clamp', (S, S), (), 'max_scalar_kwarg', (True,), (), (), ident, {'max': 1}),
('clip', (S, S, S), dont_convert((0, 1)), '', (False,)),
('clip_', (S, S, S), dont_convert((0, 1)), '', (False,)), |
Before your additions of absolute_ and clip_, it looks like inplace versions weren't tested here, so maybe they shouldn't be?
The alias test requires an inplace entry.
Note: per offline review, improvements to the existing clamp test will be separated and implemented when it's ported to the forthcoming test_unary_ufuncs.py.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Per title. Also updates our guidance for adding aliases to clarify interned_string and method_test requirements. The alias is tested by extending test_clamp to also test clip.
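As a rough illustration of that alias check (a sketch, not the PR's exact test code):

import torch

t = torch.randn(32)
expected = torch.clamp(t, -0.5, 0.5)
# the alias should agree with clamp in all three forms
assert torch.equal(torch.clip(t, -0.5, 0.5), expected)
assert torch.equal(t.clip(-0.5, 0.5), expected)
t_ = t.clone()
t_.clip_(-0.5, 0.5)
assert torch.equal(t_, expected)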