RuntimeError: CUDA error: out of memory #52
The dataset is NeRF-DS.
Hi, thanks for your interest in this work. In my experiments, NeRF-DS does not run out of memory on a 3090. Can you share the command that triggers this error?
The command I executed is as follows:
I have encountered the same issue on an RTX 4090.
@yangbaoquan, I uniformly sampled 52 images from the original dataset, which avoided this error, but the reconstruction quality was poor.
@RuiqingTang As a temporary workaround, I commented out the following lines and the training code then ran successfully:

```python
# Log and save
# cur_psnr = training_report(tb_writer, iteration, Ll1, loss, l1_loss, iter_start.elapsed_time(iter_end),
#                            testing_iterations, scene, render, (pipe, background), deform,
#                            dataset.load2gpu_on_the_fly, dataset.is_6dof)
# if iteration in testing_iterations:
#     if cur_psnr.item() > best_psnr:
#         best_psnr = cur_psnr.item()
#         best_iteration = iteration
```
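Commenting out the report loses the best-PSNR tracking entirely. The traceback shows the OOM happens where `training_report` concatenates every rendered test image on the GPU (`images = torch.cat((images, image.unsqueeze(0)), dim=0)`). A memory-friendlier alternative, sketched below, accumulates a running PSNR one image at a time instead of stacking all images; the `render_views` iterable of `(rendered, ground_truth)` pairs is hypothetical, not the repo's actual API:

```python
import torch

def psnr(img, gt):
    # PSNR in dB for images with values in [0, 1]
    mse = torch.mean((img - gt) ** 2)
    return 10.0 * torch.log10(1.0 / mse)

def mean_test_psnr(render_views):
    # render_views yields (rendered, ground_truth) pairs one at a time,
    # so only a single image pair lives on the GPU at any moment.
    total, count = 0.0, 0
    for rendered, gt in render_views:
        total += psnr(rendered, gt).item()  # .item() copies the scalar to CPU
        del rendered, gt                    # let the CUDA allocator reclaim the buffers
        count += 1
    return total / max(count, 1)
```

The running-sum average is numerically identical to averaging a concatenated batch, but peak memory stays at one image instead of the whole test set.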
My GPU is a 4060 with 8 GB of VRAM. Is that too small? However, I saw someone using a 3090 who still encountered this error:
#38
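8 GB is tight here, since the test pass stacks rendered images on the GPU on top of the Gaussians and optimizer state. Before training, you can check how much VRAM is actually free (assumes a CUDA build of PyTorch; on CPU-only installs it just prints the fallback line):

```python
import torch

# Report free vs. total memory on the current CUDA device, if one exists.
if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info()
    print(f"free: {free_b / 1024**3:.2f} GiB of {total_b / 1024**3:.2f} GiB")
else:
    print("no CUDA device visible")
```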
Here is the info:
```
Training progress: 12%|█▎ | 5000/40000 [06:40<48:38, 11.99it/s, Loss=0.0467618]Traceback (most recent call last):
File "E:\Projects\Python_projects\3D_Vsion\Deformable-3D-Gaussians\train.py", line 274, in <module>
training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations)
File "E:\Projects\Python_projects\3D_Vsion\Deformable-3D-Gaussians\train.py", line 132, in training
dataset.load2gpu_on_the_fly, dataset.is_6dof)
File "E:\Projects\Python_projects\3D_Vsion\Deformable-3D-Gaussians\train.py", line 221, in training_report
images = torch.cat((images, image.unsqueeze(0)), dim=0)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Training progress: 12%|█▎ | 5000/40000 [06:40<46:44, 12.48it/s, Loss=0.0467618]
```