
Spatial Transformation fails with downsample_factor = 1.0 #28

Open
mongoose54 opened this issue Oct 21, 2015 · 1 comment

@mongoose54

I am running the spatial transformer network example (https://github.com/Lasagne/Recipes/blob/master/examples/spatial_transformer_network.ipynb) with downsample_factor = 1.0, and I am getting the following error:

MemoryError: Error allocating 110231552 bytes of device memory (out of memory).
Apply node that caused the error: GpuElemwise{Composite{(i0 * (i1 + Abs(i1)))},no_inplace}(CudaNdarrayConstant{[[[[ 0.5]]]]}, GpuElemwise{Add}[(0, 0)].0)
Toposort index: 279
Inputs types: [CudaNdarrayType(float32, (True, True, True, True)), CudaNdarrayType(float32, 4D)]
Inputs shapes: [(1, 1, 1, 1), (256, 32, 58, 58)]
Inputs strides: [(0, 0, 0, 0), (107648, 3364, 58, 1)]
Inputs values: [<CudaNdarray object at 0x7fef7314c7f0>, 'not shown']
Outputs clients: [[GpuContiguous(GpuElemwise{Composite{(i0 * (i1 + Abs(i1)))},no_inplace}.0)]]

Can anyone reproduce this issue?
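
For context, here is a minimal sketch of the layer in question, assuming the standard lasagne.layers.TransformerLayer API; the model below is illustrative, not the notebook's exact network:

import numpy as np
import lasagne
from lasagne.layers import InputLayer, DenseLayer, TransformerLayer

# Illustrative shapes: the notebook uses 60x60 cluttered MNIST.
l_in = InputLayer(shape=(None, 1, 60, 60))

# Minimal localization network predicting the 6 affine parameters,
# initialized to the identity transform.
l_loc = DenseLayer(l_in, num_units=6,
                   W=lasagne.init.Constant(0.0),
                   b=np.array([1, 0, 0, 0, 1, 0], dtype='float32'),
                   nonlinearity=None)

# With downsample_factor=1.0 the transformer output keeps the full
# 60x60 input resolution; a factor of 2.0 would shrink it to 30x30,
# making every downstream conv activation 4x smaller in memory.
l_trans = TransformerLayer(l_in, l_loc, downsample_factor=1.0)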

mongoose54 changed the title from "Spatial Transformation fails with colored images." to "Spatial Transformation fails with downsample_factor = 1.0" on Oct 21, 2015
@jamesguoxin

This looks like the GPU running out of memory? I encountered a similar problem: when I change downsample_factor to 1.0, I get the following error message:

MemoryError: Error allocating 4305920000 bytes of device memory (out of memory).
Apply node that caused the error: GpuAllocEmpty(Shape_i{0}.0, Shape_i{0}.0, Elemwise{Composite{((((i0 + i1) - i2) // i3) + i3)}}[(0, 0)].0, Elemwise{Composite{((((i0 + i1) - i2) // i3) + i3)}}[(0, 0)].0)
Toposort index: 188
Inputs types: [TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(), (), (), ()]
Inputs strides: [(), (), (), ()]
Inputs values: [array(10000), array(32), array(58), array(58)]
Outputs clients: [[GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='valid', subsample=(1, 1), conv_mode='conv'}.0, Constant{1.0}, Constant{0.0})]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Since I'm using a GTX 970M in my laptop, the requested 4305920000 bytes (roughly 4 GB) clearly exceed my GPU's memory capacity (3 GB).
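
That figure matches a float32 tensor of exactly the shape reported in the Apply node; a quick sanity check in plain Python:

# Shape from the failing Apply node: (batch, channels, height, width)
batch, channels, height, width = 10000, 32, 58, 58
nbytes = batch * channels * height * width * 4   # float32 = 4 bytes
print(nbytes)                                    # 4305920000, ~4.01 GiB
# The same formula gives 110231552 bytes for the (256, 32, 58, 58)
# tensor in the original report above.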

Maybe this is what causes the problem?
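
If the batch dimension of 10000 comes from pushing the whole test set through the compiled function in one call, a possible workaround is to evaluate in minibatches instead. A sketch, where eval_fn and X_test are stand-ins for the notebook's compiled Theano function and test data:

import numpy as np

def eval_in_batches(eval_fn, X, batch_size=256):
    # Run a compiled Theano function over X in chunks so that only one
    # minibatch of activations has to fit on the GPU at a time.
    outputs = []
    for start in range(0, len(X), batch_size):
        outputs.append(eval_fn(X[start:start + batch_size]))
    return np.concatenate(outputs, axis=0)

# e.g. predictions = eval_in_batches(eval_fn, X_test, batch_size=256)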
