cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory #17

Open
karunaahuja opened this issue Mar 25, 2016 · 0 comments

I want to run the training on the GPU with ID 1, so I added the argument 1 to the call [ model.to_gpu(1) ] in train.py. Although I have ~4 GB free on that GPU, when I run the network with a batch size of 32, I get the following error:

cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory

Here are the parameters of the training model:

--model models/AlexNet_flic.py
--gpu 0
--epoch 1000
--batchsize 32
--snapshot 10
--datadir data/FLIC-full
--channel 3
--flip 1
--size 220
--crop_pad_inf 1.5
--crop_pad_sup 2.0
--shift 5
--lcn 1
--joint_num 7

Am I doing something wrong in the way I am changing the GPU ID or is there some other problem?
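
For reference, here is roughly how I am selecting the device (a minimal sketch, not the actual code from train.py; the Linear link below just stands in for the real model):

```python
# Minimal sketch of the device selection, assuming Chainer's standard cuda helpers.
import chainer
import chainer.links as L

gpu_id = 1                              # the physical GPU I want to train on
model = L.Linear(3, 2)                  # placeholder for the model built in train.py

chainer.cuda.get_device(gpu_id).use()   # make GPU 1 the current CUDA device
model.to_gpu(gpu_id)                    # copy the parameters onto GPU 1
```

An alternative would be to restrict device visibility from the shell instead (e.g. CUDA_VISIBLE_DEVICES=1 python train.py --gpu 0), so that physical GPU 1 appears as device 0 to the process.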
