I want to run the training on the GPU with ID 1, so I added the argument 1 to the function call [ model.to_gpu(1) ] in train.py. Although I have ~4 GB available on that GPU, when I run the network with a batchsize of 32, I get the following error:
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
Here are the training parameters:
--model models/AlexNet_flic.py
--gpu 0
--epoch 1000
--batchsize 32
--snapshot 10
--datadir data/FLIC-full
--channel 3
--flip 1
--size 220
--crop_pad_inf 1.5
--crop_pad_sup 2.0
--shift 5
--lcn 1
--joint_num 7
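For context, a rough back-of-envelope check under the listed parameters (assuming float32 tensors; this is only my estimate, not part of the original report) shows the input batch itself is small, so the allocation failure likely comes from the model's intermediate activations, gradients, and optimizer state, or from other processes already using memory on that GPU:

```python
# Rough memory estimate for one input batch under the listed training
# parameters (batchsize 32, 3 channels, 220x220 crops, float32).
# Note: this covers only the input tensor; activations, gradients, and
# optimizer state in an AlexNet-style model usually dominate GPU memory.

def input_batch_bytes(batchsize, channels, height, width, dtype_bytes=4):
    """Size in bytes of one input batch of float32 images."""
    return batchsize * channels * height * width * dtype_bytes

mib = input_batch_bytes(32, 3, 220, 220) / (1024 ** 2)
print(f"input batch: {mib:.1f} MiB")  # prints "input batch: 17.7 MiB"
```

Since the input batch alone is only ~18 MiB, halving the batchsize (or checking `nvidia-smi` for other processes on GPU 1) is a common first step when this error appears.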
Am I doing something wrong in the way I am changing the GPU ID or is there some other problem?