Making an autoencoder use CUDA? #30

Open
daviddoria opened this issue Apr 6, 2015 · 11 comments
Comments

@daviddoria

I've tried setting the encoder, decoder, the autoencoder itself, and the data to cuda using:

autoencoderModel:cuda()
autoencoderModel.encoder:cuda()
autoencoderModel.decoder:cuda()
samples.cuda()

but I get this error:

bad argument #1 to 'set' (expecting number or Tensor or storage)
stack traceback:
[C]: in function 'set'
...torch7/share/lua/5.1/nn/Module.lua:201: in function 'flatten'
...torch7/share/lua/5.1/nn/Module.lua:243: in function 'getParameters'

Is it possible to make these autoencoders work with CUDA?

@mdushkoff

I actually figured out how to do this recently, after building an autoencoder from scratch. What you have to do is simple. Say you have built an autoencoder model named autoencoderModel.

You have to move the following onto your GPU:

autoencoderModel.encoder:cuda()
autoencoderModel.decoder:cuda()
autoencoderModel.loss:cuda()

After you do this, you should be able to forward any torch.CudaTensor through the autoencoder model with no issues as long as your input dimensions match.

You would probably expect autoencoderModel:cuda() to do all of this by itself, but that is not what actually happens.

The loss function has to be moved over as well, or else it will complain that it is not receiving the correct type of Tensor. However, it looks like your issue might be slightly different...
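
For example, here is a minimal, untested sketch of the whole flow; the layer sizes and the updateOutput(input, input) call are my assumptions about how unsup.AutoEncoder is typically driven, so adjust to your setup:

-- Minimal sketch, untested. Assumes cutorch/cunn are installed and that
-- unsup.AutoEncoder(encoder, decoder, beta) is used as in the demos;
-- the sizes below are made up for illustration.
require 'unsup'
require 'cunn'

local inputSize, codeSize = 256, 64

local encoder = nn.Sequential()
encoder:add(nn.Linear(inputSize, codeSize))
encoder:add(nn.Tanh())
local decoder = nn.Linear(codeSize, inputSize)

local autoencoderModel = unsup.AutoEncoder(encoder, decoder, 1)

-- Move each sub-module and the reconstruction loss onto the GPU explicitly:
autoencoderModel.encoder:cuda()
autoencoderModel.decoder:cuda()
autoencoderModel.loss:cuda()

-- A torch.CudaTensor can now be pushed through, as long as dimensions match.
local sample = torch.randn(inputSize):cuda()
local err = autoencoderModel:updateOutput(sample, sample)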

@daviddoria
Author

Hm, it doesn't seem like "loss" is a member of the autoencoder?

Here is the error: "attempt to index field 'loss' (a nil value)"

And the code that produced it:

local autoencoder = unsup.PSD(encoder, decoder, beta)

if opt.type == 'cuda' then
  autoencoder.encoder:cuda()
  autoencoder.decoder:cuda()
  autoencoder.loss:cuda()
end

@mdushkoff

The PSD module utilizes a field called predcost instead of loss so you would do:

local autoencoder = unsup.PSD(encoder, decoder, beta)

if opt.type == 'cuda' then
    autoencoder.encoder:cuda()
    autoencoder.decoder:cuda()
    autoencoder.predcost:cuda()
end

Hopefully that works, but I have not tested it. Also, there is an error when computing the diagonal Hessian components with CUDA that I have yet to figure out, which might also prevent this from working (you could always just disable that computation).

@daviddoria
Author

Have you used the convolutional autoencoder classes (SpatialConvFistaL1 and ConvPsd, it looks like) with CUDA? I am getting an "Inconsistent parameter types. torch.FloatTensor ~= torch.CudaTensor" error when trying to use those classes with CUDA.
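
One way to narrow that down (a rough, untested sketch; it assumes the module exposes the standard nn parameters() method) is to print the type of every parameter tensor and see which ones are still torch.FloatTensor:

-- Rough diagnostic sketch, untested: list the type of each parameter tensor
-- to spot the sub-modules that were not converted to torch.CudaTensor.
-- Assumes 'autoencoder' is the ConvPsd module and that it exposes the
-- standard nn parameters() method.
local weights = autoencoder:parameters()
for i, w in ipairs(weights) do
   print(i, torch.type(w))
end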

@koraykv
Owner

koraykv commented Apr 28, 2015

This relates to my last comment in issue #31. I wrote these classes a long time ago and have not been maintaining them lately. The modules and custom classes most probably have to be modified to work with CUDA.

@erosennin

I'm currently trying to make ConvPsd work with CUDA. There is no CUDA implementation of nn.SpatialConvolutionMap AFAIK, so I've replaced it with nn.SpatialConvolution:

--- a/ConvPsd.lua
+++ b/ConvPsd.lua
@@ -14,6 +14,7 @@ function ConvPSD:__init(conntable, kw, kh, iw, ih, lambda, beta, params)
    local decodertable = conntable:clone()
    decodertable:select(2,1):copy(conntable:select(2,2))
    decodertable:select(2,2):copy(conntable:select(2,1))
+   local inputFeatures = conntable:select(2,1):max()
    local outputFeatures = conntable:select(2,2):max()

    -- decoder is L1 solution
@@ -26,15 +27,15 @@ function ConvPSD:__init(conntable, kw, kh, iw, ih, lambda, beta, params)
    self.params.encoderType = params.encoderType or 'linear'

    if params.encoderType == 'linear' then
-      self.encoder = nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1)
+      self.encoder = nn.SpatialConvolution(inputFeatures, outputFeatures, kw, kh, 1, 1)
    elseif params.encoderType == 'tanh' then
       self.encoder = nn.Sequential()
-      self.encoder:add(nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1))
+      self.encoder:add(nn.SpatialConvolution(inputFeatures, outputFeatures, kw, kh, 1, 1))
       self.encoder:add(nn.Tanh())
       self.encoder:add(nn.Diag(outputFeatures))
    elseif params.encoderType == 'tanh_shrink' then
       self.encoder = nn.Sequential()
-      self.encoder:add(nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1))
+      self.encoder:add(nn.SpatialConvolution(inputFeatures, outputFeatures, kw, kh, 1, 1))
       self.encoder:add(nn.TanhShrink())
       self.encoder:add(nn.Diag(outputFeatures))
    else

In theory, this should work exactly the same way, but it does not. I'm getting an error message about negative second derivatives:

$ luajit  demo_psd_conv.lua
<...>
Starting Training
Computing Hessian
Min Hessian=0.02 Max Hessian=500
luajit: ...a/src/torch/install/share/lua/5.1/unsup/UnsupTrainer.lua:106: Negative ddx
stack traceback:
        [C]: in function 'error'
        ...a/src/torch/install/share/lua/5.1/unsup/UnsupTrainer.lua:106: in function 'computeDiagHessian'
        ...a/src/torch/install/share/lua/5.1/unsup/UnsupTrainer.lua:36: in function 'train'
        demo_psd_conv.lua:181: in function 'train'
        demo_psd_conv.lua:192: in main chunk
        [C]: at 0x00406690

Any ideas, why simply replacing nn.SpatialConvolutionMap with nn.SpatialConvolution makes such a difference? Where should I look?
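
For what it's worth, the two layers only compute the same thing when the connection table is full (every input plane connected to every output plane); with a sparse or random table the dense layer is not a drop-in replacement. A rough, untested illustration with made-up sizes:

-- Untested sketch; kernel size and plane counts are made up.
require 'nn'

local conntable = nn.tables.full(1, 32)   -- full table: 1 input plane -> 32 output planes
local kw, kh = 9, 9

local inputFeatures  = conntable:select(2,1):max()   -- 1
local outputFeatures = conntable:select(2,2):max()   -- 32

-- Sparse "map" convolution: no CUDA implementation.
local mapConv = nn.SpatialConvolutionMap(conntable, kw, kh, 1, 1)
-- Dense convolution: same connectivity as a full table, and cunn provides a CUDA kernel.
local denseConv = nn.SpatialConvolution(inputFeatures, outputFeatures, kw, kh, 1, 1)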

@viorik

viorik commented Sep 14, 2015

I managed to get conv_psd running with CUDA, if anyone is interested: https://github.com/viorik/unsupgpu

@soumith
Collaborator

soumith commented Oct 4, 2015

@viorik that's nice. you should consider adding a link to the cheatsheet, where the unsup entry is.

@viorik

viorik commented Oct 5, 2015

@soumith OK, I'll do that; I am also now making a few more changes to make it run on mini-batches, since otherwise running with CUDA is not very useful.

@johnyboyoh

@viorik sounds good. BTW, I failed to install your package as it appears to be unknown to luarocks.

@viorik

viorik commented Oct 12, 2015

Right, that's because I haven't submitted a rockspec for inclusion in the rocks server. I am a newbie, so I am still in the experimental stage :). But if you git clone my repo, you should be able to do luarocks make locally and use it.
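
Something like this should work (assuming there is a single rockspec in the repo root):

$ git clone https://github.com/viorik/unsupgpu
$ cd unsupgpu
$ luarocks make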
