This happens intermittently with CUDA tensors, but never with CPU tensors:
```python
>>> x = torch.rand(3, 4, 5).cuda()
>>> x.sum()
26.833271026611328
>>> x.transpose(0, 1).sum()
26.83327293395996
```
This happens on both a recent PyTorch build (0.4.0a0+4970e73) and 0.3.0.
Is this expected behavior?
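One way to confirm that the gap is just rounding noise is to compare both reductions against a float64 accumulation. A minimal sketch, assuming a CUDA device is available; `x` mirrors the snippet above, and `.item()` requires PyTorch 0.4+:

```python
import torch

x = torch.rand(3, 4, 5).cuda()
ref = x.double().sum().item()  # float64 reference accumulation

# Both float32 reductions should land within ~1e-5 of the reference;
# only the low-order bits differ between the two summation orders.
print(abs(x.sum().item() - ref))
print(abs(x.transpose(0, 1).sum().item() - ref))
```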
Yes, this is expected: floating-point addition is not associative, and in the second case the elements are added in a different order, so the rounding errors accumulate slightly differently.
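A minimal sketch of why summation order matters, using plain Python floats (no PyTorch required); the same effect, spread across many float32 additions on the GPU, produces the small discrepancy above:

```python
a, b, c = 1e16, -1e16, 1.0

# Left-to-right: a and b cancel exactly, then c survives.
print((a + b) + c)  # 1.0

# Grouped differently: c is too small to change the nearest representable
# float near 1e16, so it is absorbed into b before the cancellation.
print(a + (b + c))  # 0.0
```

CPU reductions typically accumulate in a fixed sequential order, while a CUDA reduction combines partial sums from many threads, which is why only the GPU result varies with the tensor's memory layout.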