test_rgb2hsv is flaky #2433
Problems:

Inconsistency due to precision. Consider two pixel values that differ only by a trailing `6` in the eighth decimal place. colorsys flips the hue value:

```python
colorsys.rgb_to_hsv(0.3749333, 0.0530237, 0.0530237)
# (0.0, 0.85857831246251, 0.3749333)
colorsys.rgb_to_hsv(0.3749333, 0.0530237, 0.05302376)
# (0.9999999689353781, 0.85857831246251, 0.3749333)
```

`_rgb2hsv` gives the same result for both. Why the test fails:

```python
x[7902]
# tensor([0.3749333024024963, 0.0530236959457397, 0.0530237555503845])
x[7902].numpy()
# array([0.3749333 , 0.053023696, 0.053023756], dtype=float32)
```

All the precision seems to be lost when converting to numpy. The tensor is initialized as `torch.float32` but does not appear to be converted exactly to `numpy.float32`. I think it is one of those boundary floating-point cases. A possible solution would be to ignore the error if the difference is (close to) one. A more complex solution would be to find the index of the max-diff value and, if those two hue values are approximately 0.0 and 0.99, ignore the error.
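The "ignore a difference close to one" idea amounts to measuring hue distance around the circle. A minimal sketch of that check (`hue_diff` is an illustrative name, not part of the torchvision test suite):

```python
import numpy as np

def hue_diff(h1, h2):
    """Shorter-arc distance between hues on the periodic interval [0, 1)."""
    d = np.abs(np.asarray(h1, dtype=np.float64) - np.asarray(h2, dtype=np.float64))
    return np.minimum(d, 1.0 - d)

# 0.0 and 0.9999999689... are the same hue up to wrap-around,
# so their distance is tiny rather than ~1.0:
print(hue_diff(0.0, 0.9999999689353781))  # on the order of 3e-8
```

With this metric, the colorsys flip above would no longer register as a large error.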
I tried some more things.

```python
x
# tensor([0.3749333024024963, 0.0530236959457397, 0.0530237555503845])
```

When I convert this to numpy:

```python
x.numpy()
# array([0.3749333 , 0.053023696, 0.053023756], dtype=float32)
```

The precision is lost. The interesting thing is that if I convert this value back to a tensor, the precision comes back:

```python
torch.from_numpy(x.numpy())
# tensor([0.3749333024024963, 0.0530236959457397, 0.0530237555503845])
```

I tried the above again, but more explicitly. I define a numpy array as below:

```python
a = np.array([0.3749333], dtype=np.float32)
a
# array([0.3749333], dtype=float32)
```

Now when I convert this to a tensor, I get all those extra decimals:

```python
torch.from_numpy(a)
# tensor([0.3749333024024963])
```

There is something wrong with the pytorch–numpy conversion. It happens for all values:

```python
a = np.array([0.1], dtype=np.float32)
a
# array([0.1], dtype=float32)
torch.from_numpy(a)
# tensor([0.1000000014901161])
```

@fmassa do you know how pytorch handles numpy conversion?
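The "extra decimals" can be reproduced without torch or numpy at all, which suggests they come from the float32 representation itself rather than from any conversion. A stdlib-only sketch:

```python
import struct

# Round-trip 0.1 through IEEE-754 binary32 (the format behind
# torch.float32 and np.float32).
f32 = struct.unpack('f', struct.pack('f', 0.1))[0]
print(f32)  # roughly 0.10000000149011612, the value torch was printing
# numpy prints this same value as "0.1" because that short string already
# round-trips to the identical float32 bits; no precision is actually lost.
assert f32 != 0.1
```

So both libraries hold the same bits; they only differ in how many digits they choose to display.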
It is a PyTorch float issue. PyTorch is acting weird; nothing is wrong with the numpy conversion, as shown below:

```python
torch.tensor([0.1], dtype=torch.float32)
# tensor([0.1000000014901161])
```

The above works correctly in numpy. I tested this on PyTorch master and PyTorch 1.5.1. Is it a known PyTorch thing? Should I open a new issue on PyTorch, or am I missing something about how PyTorch works?
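One way to check this with numpy alone (a sketch, not from the thread): widening the float32 array to float64 reveals the same digits torch prints, so the digits belong to the stored float32 value, not to the torch conversion.

```python
import numpy as np

a = np.array([0.1], dtype=np.float32)
# Widening to float64 exposes the exact float32 value; numpy then prints
# the long decimal expansion, just like torch does.
widened = a.astype(np.float64)
print(widened[0])  # roughly 0.10000000149011612
assert widened[0] != 0.1
```

In other words, torch simply prints more digits of the same float32 value by default than numpy does.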
I think the issue might be that we are performing computations in float32, while numpy and python are doing them in float64. I think the best fix is to change the test so that it takes into account the fact that the hue is periodic. For example, instead of doing

```python
dist_h = (x_h - y_h).abs().max()
```

we could instead do something like

```python
dist_h = ((x_h * 2 * math.pi).sin() - (y_h * 2 * math.pi).sin()).abs().max()
```

so that 0 and 1 are treated as the same hue.
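A torch-free sketch of this suggestion, using plain `math.sin` on Python lists (`hue_dist` is an illustrative name; the actual test operates on tensors):

```python
import math

def hue_dist(x_h, y_h):
    # Map each hue onto the unit circle via sin(2*pi*h), so h=0.0 and h=1.0
    # (the same angle) compare as equal; mirrors the suggested test change.
    return max(abs(math.sin(a * 2 * math.pi) - math.sin(b * 2 * math.pi))
               for a, b in zip(x_h, y_h))

x_h = [0.0, 0.25]
y_h = [0.9999999689353781, 0.25]
print(hue_dist(x_h, y_h))  # tiny, on the order of 2e-7
```

One caveat worth noting: sin alone also maps `h` and `0.5 - h` to the same value, so a stricter variant might compare both sin and cos of the angle (or use a shortest-arc distance) to keep distinct hues distinguishable.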
It looks like `test_rgb2hsv` is flaky and fails sometimes.

https://app.circleci.com/pipelines/github/pytorch/vision/3217/workflows/77e60582-2ddc-46db-933f-33c45c27387c/jobs/178179/tests

Example error that we get: