to_device should not copy when the operation is a no op #645
I think this requires a clarification but not a new keyword. The case of "what if the given device to copy to is the current device" isn't covered in the current text; however, I'd expect it to be handled by the natural implementation (return `self`). In the whole design of the standard we do not allow modifying aliased memory, so returning `self` is fully equivalent to making a data copy in memory. And making a data copy is an execution detail, so it's clear we should never specify such a thing.
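To make the "natural implementation" concrete, here is a minimal sketch of how a library might treat the same-device case as a no-op; the `Array` class and the `_copy_to_device` helper are hypothetical stand-ins, not part of the standard:

```python
class Array:
    """Hypothetical array type; only the parts relevant to to_device() are shown."""

    def __init__(self, data, device):
        self._data = data
        self._device = device

    @property
    def device(self):
        return self._device

    def to_device(self, device, /, *, stream=None):
        if device == self._device:
            # Same device: return self. This is indistinguishable from
            # returning a copy, because the standard never permits mutating
            # aliased memory.
            return self
        # _copy_to_device is a hypothetical backend transfer routine.
        return Array(_copy_to_device(self._data, device), device)
```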
#626 (comment) discussed the consideration of adding a `copy` keyword.
The spec for `to_device()` doesn't say anything about the case where the given array is already on the specified device.

Note that PyTorch's `to` has a `copy` flag, which is `False` by default (https://pytorch.org/docs/stable/generated/torch.Tensor.to.html); that prevents copying when the device is already the same. `to_device` should probably behave the same way. We may or may not want to add a `copy` flag to `to_device` as well (but note that you can control this explicitly using `asarray`).

This came up while thinking about this code in scikit-learn: https://github.com/scikit-learn/scikit-learn/pull/26315/files/42524bd42900d8ea5f4a334780387a72c6f9580d#diff-86c94a3ca33490c6190f488f5d40b01bf0fd29be36da0b4497ef0da1fda4148a
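To illustrate the PyTorch behavior referenced above, and the `asarray` escape hatch (a quick sketch; the NumPy lines assume NumPy >= 2.0, where `asarray` gained the `copy` keyword):

```python
import numpy as np
import torch

x = torch.ones(3)                     # already on the CPU
print(x.to("cpu") is x)               # True: same device and dtype, copy=False default
print(x.to("cpu", copy=True) is x)    # False: copy forced even though device matches

a = np.ones(3)
print(np.asarray(a, copy=None) is a)  # True: copy only if needed (the default)
print(np.asarray(a, copy=True) is a)  # False: always make a copy
```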