
Implement (bilinear) upsampling #613

Open · maxfreu opened this issue Dec 28, 2020 · 1 comment
Labels: enhancement (New feature or request)

maxfreu (Contributor) commented Dec 28, 2020

Hi! For all the DL libraries out there, as well as the image-processing packages, it would make sense to have GPU-powered upsampling (nearest, bilinear, cubic), including the corresponding derivatives. Being able to do this in any number of dimensions would of course be nice, but I think 2D would be enough to get going. On the Flux side the issue is tackled in this PR. I ported the Caffe2 kernels for bilinear upsampling from PyTorch in this gist. Please have a look, decide whether you want it here in some form, and tell me which changes to make and where to put the code eventually. I could then set up a PR, including more rigorous tests. Note that there is an older attempt here.
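For reference, here is a minimal sketch of what a 2D bilinear-upsampling forward kernel could look like in CUDA.jl. This is not the gist's actual port; the function name, the single-channel (W, H) layout, and the half-pixel (align_corners=false style) coordinate mapping are all assumptions for illustration:

```julia
using CUDA

# Sketch of a forward kernel: one thread per output pixel of a single-channel
# (W, H) image, sampling the source at half-pixel-offset coordinates.
function bilinear_upsample_kernel!(dst, src, scale_x, scale_y)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x  # output column
    j = (blockIdx().y - 1) * blockDim().y + threadIdx().y  # output row
    if i <= size(dst, 1) && j <= size(dst, 2)
        # map the output pixel center back into source coordinates
        x = (i - 0.5f0) / scale_x + 0.5f0
        y = (j - 0.5f0) / scale_y + 0.5f0
        x0 = clamp(floor(Int, x), 1, size(src, 1)); x1 = min(x0 + 1, size(src, 1))
        y0 = clamp(floor(Int, y), 1, size(src, 2)); y1 = min(y0 + 1, size(src, 2))
        wx = clamp(x - x0, 0f0, 1f0)  # horizontal blend weight
        wy = clamp(y - y0, 0f0, 1f0)  # vertical blend weight
        @inbounds dst[i, j] =
            (1 - wy) * ((1 - wx) * src[x0, y0] + wx * src[x1, y0]) +
            wy       * ((1 - wx) * src[x0, y1] + wx * src[x1, y1])
    end
    return nothing
end

src = CUDA.rand(Float32, 8, 8)
dst = CUDA.zeros(Float32, 16, 16)
threads = (16, 16)
blocks = cld.(size(dst), threads)
@cuda threads=threads blocks=blocks bilinear_upsample_kernel!(dst, src, 2f0, 2f0)
```

The forward pass is a pure gather, so no atomics are needed; the backward pass is where overlapping scattered writes come in.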

maxfreu added the enhancement label Dec 28, 2020
maleadt (Member) commented Jan 4, 2021

Looks good. atomic_add! and the pointer operations can be avoided by using @atomic. And I assume it wouldn't just be the kernel that gets added to CUDA.jl, but more specifically an implementation of an NNlib interface; if that's the case, CUDA.jl is a good place to implement that interface.
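To illustrate the suggestion (hypothetical names, not the gist's code): in the backward pass several output gradients can scatter into the same input pixel, so the accumulation must be a read-modify-write. Instead of forming a raw pointer and calling CUDA.atomic_add! on it, the @atomic macro can be applied directly to ordinary array indexing:

```julia
using CUDA

# Sketch of a scatter-style kernel: several threads may hit the same index
# of grad, so the update has to be atomic to avoid lost writes.
function accumulate_kernel!(grad, idx, val)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(idx)
        # equivalent to CUDA.atomic_add!(pointer(grad, idx[i]), val[i]),
        # but without the explicit pointer arithmetic
        CUDA.@atomic grad[idx[i]] += val[i]
    end
    return nothing
end

grad = CUDA.zeros(Float32, 4)
idx  = cu(Int32[1, 2, 1, 2, 1])   # colliding indices
val  = cu(Float32[1, 2, 3, 4, 5])
@cuda threads=32 accumulate_kernel!(grad, idx, val)
Array(grad)  # [9.0, 6.0, 0.0, 0.0]: no updates lost despite the collisions
```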
