Why does Halide allocate memory on CPU when wrapping a PyTorch CUDA tensor? #6285
Unanswered
twesterhout asked this question in Q&A
In `Halide::PyTorch::wrap`, when `tensor` is on CUDA, a new tensor is allocated on the CPU: `Halide/src/runtime/HalidePyTorchHelpers.h`, line 95 at commit `da7c66e`.

Is there a way to avoid it?
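Not an answer from the maintainers, but one possible zero-copy approach, sketched under assumptions: Halide's runtime `Buffer` exposes `device_wrap_native`, which attaches an existing device pointer (here a CUDA pointer obtained from the tensor) to a buffer whose host side is left null, so no CPU allocation or copy should occur. The helper name `wrap_cuda_tensor` is hypothetical; `Halide::Runtime::Buffer`, `device_wrap_native`, and `halide_cuda_device_interface()` are real Halide runtime APIs, but whether this interoperates correctly with the `HalidePyTorchHelpers.h` path in question is untested.

```cpp
#include <cstdint>
#include <vector>

#include "HalideBuffer.h"       // Halide::Runtime::Buffer
#include "HalideRuntimeCuda.h"  // halide_cuda_device_interface()
#include <torch/extension.h>    // at::Tensor

// Hypothetical helper (not part of Halide): wrap a contiguous CUDA float
// tensor as a device-only Halide buffer, avoiding any host allocation.
Halide::Runtime::Buffer<float> wrap_cuda_tensor(const at::Tensor &tensor) {
    // Halide's dimension 0 is the fastest-varying one, so reverse the
    // PyTorch (row-major) shape when building the buffer.
    std::vector<int> dims(tensor.sizes().rbegin(), tensor.sizes().rend());

    // A null host pointer creates a buffer with shape metadata only;
    // no CPU memory is allocated.
    Halide::Runtime::Buffer<float> buf(nullptr, dims);

    // Attach the tensor's CUDA device pointer directly.
    buf.device_wrap_native(
        halide_cuda_device_interface(),
        reinterpret_cast<uint64_t>(tensor.data_ptr<float>()));

    // Mark the device copy as the valid one so Halide does not try to
    // copy from the (nonexistent) host side.
    buf.set_device_dirty();
    return buf;
}
```

A buffer wrapped this way does not own the device memory, so the caller must keep the tensor alive for the buffer's lifetime and call `device_detach_native` before the tensor is freed.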