__cuda_array_interface__ conversion does not support readonly arrays #32868
Labels
- module: cuda (Related to torch.cuda, and CUDA support in general)
- module: internals (Related to internal abstractions in c10 and ATen)
- module: numba
- triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
PyTorch does not support importing readonly GPU arrays via `__cuda_array_interface__` (https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html):
pytorch/torch/csrc/utils/tensor_numpy.cpp, line 292 at 2471ddc
To Reproduce
If you have a copy of jax and jaxlib with GPU support built from head, the following will reproduce:
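The original JAX snippet is not preserved here, and reproducing it requires a GPU build of jaxlib. As a GPU-free sketch of what PyTorch is handed, here is a hypothetical object exposing a `__cuda_array_interface__` whose `data` tuple carries the readonly flag as its second element, per the numba spec (the class name and the pointer value are made up for illustration; no real device memory is involved):

```python
class FakeReadonlyCudaArray:
    """Hypothetical stand-in for a readonly device array (no real GPU memory)."""

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": (3,),
            "typestr": "<f4",
            # data is (device_pointer, readonly); readonly=True is what
            # PyTorch's importer rejects in tensor_numpy.cpp.
            "data": (0x1000, True),
            "version": 2,
        }


iface = FakeReadonlyCudaArray().__cuda_array_interface__
readonly = iface["data"][1]
print(readonly)  # True
```

With a real readonly JAX device array, passing such an object to `torch.as_tensor` (or `torch.tensor`) is where the import fails.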
(Aside: this is not a PyTorch bug, but curiously CuPy drops the readonly flag, so you can make the import "work" by laundering the array through CuPy.)
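A GPU-free illustration of the laundering effect described in the aside: the re-exported interface is the same except that the readonly flag in the `data` tuple is cleared, so a downstream consumer like PyTorch never sees `readonly=True`. (The dict below is a hypothetical interface, not output from CuPy itself.)

```python
# Interface exposed by the readonly source array (hypothetical values).
src_iface = {
    "shape": (3,),
    "typestr": "<f4",
    "data": (0x1000, True),  # (device pointer, readonly=True)
    "version": 2,
}

# What the laundered interface effectively looks like after a round trip
# through CuPy: same pointer, readonly flag dropped.
laundered = dict(src_iface, data=(src_iface["data"][0], False))
print(laundered["data"][1])  # False
```

This is why the workaround "works": the information that the buffer is readonly is simply lost, which is arguably its own bug.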
Expected behavior
PyTorch should support the readonly flag.
cc @ngimel