Hi! For all the DL libraries out there, as well as the image-processing packages, it would make sense to have GPU-powered upsampling (nearest, bilinear, cubic), including derivatives. Supporting arbitrary dimensionality would be nice, of course, but I think 2D would be enough to get going. On the Flux side the issue is tackled in this PR. I ported the Caffe2 kernels for bilinear upsampling from PyTorch in this gist. Please have a look, decide whether you want it here in some form, and tell me which changes to make and where to put it. I could then set up a PR, including more rigorous tests. Note that there is an older attempt here.
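For context, a minimal sketch of what such a forward kernel could look like in CUDA.jl, assuming NNlib's WHCN layout and the align_corners=false coordinate convention used by the PyTorch/Caffe2 kernels. The name `upsample_bilinear_kernel!` and the one-thread-per-output-pixel launch are illustrative choices, not a settled API:

```julia
using CUDA

# Sketch: 2D bilinear upsampling forward pass, WHCN layout.
# One thread per output pixel; channels and batch are iterated inside.
function upsample_bilinear_kernel!(y, x)
    T = eltype(y)
    out_w, out_h, C, N = size(y)
    in_w, in_h = size(x, 1), size(x, 2)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= out_w * out_h
        ow = (i - 1) % out_w + 1
        oh = (i - 1) ÷ out_w + 1
        # Map the output pixel to a fractional source coordinate
        # (align_corners = false convention, as in PyTorch/Caffe2).
        sw = clamp((ow - T(0.5)) * in_w / out_w + T(0.5), T(1), T(in_w))
        sh = clamp((oh - T(0.5)) * in_h / out_h + T(0.5), T(1), T(in_h))
        w0 = floor(Int, sw); w1 = min(w0 + 1, in_w)
        h0 = floor(Int, sh); h1 = min(h0 + 1, in_h)
        λw = sw - w0
        λh = sh - h0
        for n in 1:N, c in 1:C
            @inbounds y[ow, oh, c, n] =
                (1 - λh) * ((1 - λw) * x[w0, h0, c, n] + λw * x[w1, h0, c, n]) +
                λh       * ((1 - λw) * x[w0, h1, c, n] + λw * x[w1, h1, c, n])
        end
    end
    return nothing
end

# Example launch: upsample a 16×16 image to 32×32.
x = CUDA.rand(Float32, 16, 16, 3, 1)
y = CUDA.zeros(Float32, 32, 32, 3, 1)
@cuda threads=256 blocks=cld(32 * 32, 256) upsample_bilinear_kernel!(y, x)
```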
Looks good. `atomic_add!` and pointer operations can be avoided by using `@atomic`. And I assume it wouldn't be just the kernel that gets added to CUDA.jl, but more specifically an implementation of an NNlib interface; if that's the case, CUDA.jl is a good place to implement that interface.
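To illustrate the suggestion: in the backward pass several output gradients can scatter into the same input pixel, so the accumulation must be atomic. Rather than computing raw pointers and calling `atomic_add!`, one can index the destination directly and let `CUDA.@atomic` lower to the appropriate atomic instruction. This is a hedged sketch; `scatter_add_kernel!` and its arguments are made-up names, not part of the gist:

```julia
using CUDA

# Sketch: atomic scatter-add, the accumulation pattern a bilinear
# backward kernel needs. CUDA.@atomic replaces manual pointer math
# plus atomic_add!.
function scatter_add_kernel!(dx, dy, idx)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(dy)
        CUDA.@atomic dx[idx[i]] += dy[i]
    end
    return nothing
end

dy  = CUDA.rand(Float32, 1024)
idx = CuArray(rand(1:16, 1024))  # many threads may hit the same slot
dx  = CUDA.zeros(Float32, 16)
@cuda threads=256 blocks=4 scatter_add_kernel!(dx, dy, idx)
```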