permutedims errors on high dimensional tensors #333
I fixed the above error. The good news is that it no longer errors on 16-18 dimensional arrays. The bad news is that we get a new error.
When exiting the Julia REPL, I see the following error:
This error message is very similar to … I have submitted a WIP PR #334 to better inspect this issue.
The memory check
I defined something like the following: vectorize the arrays first, then after the permutation reshape the result back.

```julia
function LinearAlgebra.permutedims!(dest::GPUArrays.AbstractGPUArray, src::GPUArrays.AbstractGPUArray, perm)
    perm isa Tuple || (perm = Tuple(perm))
    size_dest = size(dest)
    size_src = size(src)
    # launch the kernel on the flattened arrays; the original shapes are closed over
    CUDA.gpu_call(vec(dest), vec(src), perm; name="permutedims!") do ctx, dest, src, perm
        i = @linearidx src
        I = l2c(size_src, i)  # l2c: linear index -> Cartesian index (user-defined helper)
        # c2l: Cartesian index -> linear index (user-defined helper)
        @inbounds dest[c2l(size_dest, GPUArrays.genperm(I, perm))] = src[i]
        return
    end
    return dest
end
```
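The `l2c`/`c2l` helpers are not shown in the comment above, but the index bookkeeping they stand for can be sketched language-neutrally. Below is a minimal Python/NumPy sketch (column-major order mirrors Julia's memory layout); the names `l2c`, `c2l`, `genperm`, and `permutedims_via_linear` echo the snippet but are my own illustration under assumed semantics, not GPUArrays' actual implementation:

```python
import numpy as np

def l2c(shape, i):
    """Linear (0-based) index -> Cartesian index tuple, column-major order."""
    idx = []
    for s in shape:
        idx.append(i % s)
        i //= s
    return tuple(idx)

def c2l(shape, I):
    """Cartesian index tuple -> linear (0-based) index, column-major order."""
    i, stride = 0, 1
    for s, d in zip(I, shape):
        i += s * stride
        stride *= d
    return i

def genperm(I, perm):
    """Permute a Cartesian index, analogous to GPUArrays.genperm."""
    return tuple(I[p] for p in perm)

def permutedims_via_linear(src, perm):
    """Permute dimensions using only flat (vectorized) arrays plus index math,
    mirroring the strategy of the kernel above."""
    size_src = src.shape
    size_dest = tuple(size_src[p] for p in perm)
    dest = np.empty(size_dest, order="F")
    dsrc = np.asfortranarray(src).ravel(order="F")  # flattened source
    ddst = dest.ravel(order="F")                    # view into dest's storage
    for i in range(dsrc.size):
        I = l2c(size_src, i)
        ddst[c2l(size_dest, genperm(I, perm))] = dsrc[i]
    return dest
```

For example, `permutedims_via_linear(a, (2, 0, 1))` agrees with `np.transpose(a, (2, 0, 1))`; the point of the trick is that only the scalar index math ever touches the high-dimensional shape, while the arrays handed to the kernel are plain vectors.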
Just to note that I'm hitting the same problem. MWE:
leads to:
The erroring line is GPUArrays.jl/src/host/linalg.jl, line 200 at commit c418821.
I think this is the 16-element tuple size issue. Is there an easy way to circumvent this error?