PyTorch tensor factory methods should use torch namespace instead of at #1197
I see, thanks for the information! Are all those factory functions the ones found in "torch/csrc/autograd/generated/variable_factories.h"?
I'm not too familiar with the PyTorch codebase yet, but yes, that looks right to me.
I can only speak for my use case, where it would be totally fine. Since I'm building an idiomatic Scala API on top of the JavaCPP PyTorch bindings, I'm wrapping these method calls anyway.
A commit referencing this issue was added: "…` as prefix in presets for PyTorch (issue #1197)"
Done in commit f093384! Please give it a try with the snapshots: http://bytedeco.org/builds/
I just did a quick test after switching to the snapshot build. Looking good! Thanks! And also thanks for updating to PyTorch 1.12. I haven't tested all factory methods yet, but I think we can close this issue and reopen it (or open a new one) if something is still missing.
This fix has been released as part of version 1.5.8. Thanks for reporting and for testing!
Many tensor factory methods defined in torch.java get mapped to functions in the `at` namespace (from the ATen tensor library underlying PyTorch) instead of the `torch` namespace. See `rand` for instance, but this is the case for most factory functions and possibly others as well.

Usually, though, we want to call factory functions in the `torch` namespace, because only they give us things like variables and autodiff; i.e., `requires_grad` does not work on factory methods from ATen. This is also stated in the docs:
https://pytorch.org/cppdocs/#c-frontend
https://pytorch.org/cppdocs/notes/faq.html#i-created-a-tensor-using-a-function-from-at-and-get-errors
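To make the distinction concrete, here is a minimal C++ sketch of what the linked docs describe (illustrative, assuming a standard libtorch setup; not taken from the presets themselves). The `at::` factory yields a plain ATen tensor without autograd metadata, while the `torch::` factory accepts `torch::requires_grad()` and participates in autodiff:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // at:: factory: plain ATen tensor, no autograd support (per the FAQ above).
  at::Tensor a = at::rand({2, 3});

  // torch:: factory: accepts TensorOptions such as requires_grad,
  // so the resulting tensor participates in autodiff.
  torch::Tensor t = torch::rand({2, 3}, torch::requires_grad());
  torch::Tensor loss = t.sum();
  loss.backward();

  std::cout << t.grad() << std::endl;  // gradient is populated
  return 0;
}
```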
One thing we might have to consider is backward compatibility, e.g. by using different names for the colliding functions from the torch namespace (see the sketch below).
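For illustration, the collision only arises because JavaCPP flattens both namespaces into a single class; in C++ the two functions coexist without ambiguity (a sketch under the same assumptions as above):

```cpp
#include <torch/torch.h>

void namespaces_coexist() {
  // Both symbols exist side by side in C++ because they live in
  // different namespaces; flattening them into one Java class
  // (torch.java) is what forces a renaming choice.
  at::Tensor a = at::rand({2, 2});        // ATen-level factory
  torch::Tensor b = torch::rand({2, 2});  // variable-aware factory
}
```

On the Java side, one resolution (hinted at by the prefix mentioned in the referenced commit message) would be to expose the `torch::` variants under distinct prefixed names while keeping the existing methods; the exact names chosen by the presets are not spelled out in this thread.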