GPU OOM in val stage when training without mask #101
Here is the complete error: Epoch 0: : 0it [00:00, ?it/s]Update finite_difference_eps to 0.027204705103003882
I reduced chunk_size from 2048 to 1024, and then it works. But why does it need so much more GPU memory when running without the mask?
Hi, when I run "python launch.py --config configs/neus-dtu-wmask.yaml --gpu 1 --train", everything is OK, but when I run "python launch.py --config configs/neus-dtu.yaml --gpu 1 --train", I get CUDA out of memory.
I am using the latest code, where you've modified the chunk_batch function in models/utils.py as you said ("move all output tensors to CPU before merging"). I even set dynamic_ray_sampling=false and reduced max_train_num_rays to 2048, but CUDA out of memory still happens. Could you please give me some advice, thanks!
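For context, the chunking pattern discussed here (process rays in fixed-size chunks and move each chunk's outputs off the GPU before merging, so peak memory scales with chunk_size rather than the full batch) can be sketched in plain Python. This is a minimal illustration of the idea, not the project's actual chunk_batch implementation; the function name and signature here are simplified for demonstration.

```python
def chunk_batch(fn, chunk_size, inputs):
    """Apply `fn` to `inputs` in chunks of `chunk_size` and merge the results.

    Bounding the chunk size bounds the peak memory used at any one time,
    which is why reducing chunk_size (e.g. 2048 -> 1024) can avoid OOM.
    """
    outputs = []
    for i in range(0, len(inputs), chunk_size):
        out = fn(inputs[i:i + chunk_size])
        # In the real (PyTorch) code, this is the point where each chunk's
        # output tensors would be moved to CPU (e.g. out.cpu()) before
        # merging, so intermediate results do not accumulate on the GPU.
        outputs.append(out)
    # Merge all chunk outputs back into one flat result.
    return [x for chunk in outputs for x in chunk]
```

In validation, the model renders full images (every pixel is a ray), so the per-step batch is much larger than during training; chunking is what keeps that tractable.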