GUI and Memory Constraints #36
I just added CLI options to reduce the GUI resolution, e.g. `--width` and `--height`. You can additionally load fewer training images, e.g. by specifying […].

Regarding your second question: if you're running in SDF mode and haven't got OptiX configured, slow training times like this are expected. Otherwise, it's likely the network.
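For concreteness, here is a hedged illustration of those options as a small Python launcher. The `--width`/`--height` flags are the ones discussed in this thread; the binary path and the `--scene` flag follow the README of the time, so adjust them for your build.

```python
# Minimal sketch: launch the testbed at a reduced GUI resolution.
# Binary path and --scene are assumptions; --width/--height are from this thread.
import subprocess

subprocess.run([
    "./build/testbed",
    "--scene", "data/nerf/fox",
    "--width", "1280",   # GUI render width in pixels
    "--height", "720",   # GUI render height in pixels
], check=True)
```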
Exciting project! And thank you for providing support on GitHub. Unfortunately I'm unable to run the fox scene on a Titan Xp (SM 61) with 12 GB VRAM, on Ubuntu 18.04 with CUDA 11.6. So is the memory requirement for NeRFs much higher than 8 GB when using older cards and falling back to CutlassMLP? Reducing `--width` and `--height` does nothing (not even at 128x72). Output: […]
Disabling the GUI has no effect for nerf/fox. When using nerf_synthetic/lego, it starts running, with 11.8 GB of GPU memory used. When removing the `--no_gui` flag on any of the nerf_synthetic samples, it crashes with a […].
It seems that memory consumption is different across architectures, probably due to the […].
To shed some more light on the increased memory usage: there are actually two factors at play. […]
Just a heads-up that memory requirements are down by ~1 GB now (#99). This'll hopefully make it easier to get things going on 8 GB cards.
For everyone following this: instant-ngp now requires vastly less memory ([…]). The technical reason for the reduced memory usage is a custom memory allocator that exploits the GPU's virtual memory capabilities. The allocator is now part of tiny-cuda-nn and permits low-overhead allocs/deallocs of stream-ordered temporary storage. (It is not the same as CUDA's own stream-ordered allocations, which are slower and were therefore not an option.) This allows for much more re-use.
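To illustrate the re-use idea, here is a toy CPU-only Python model, not tiny-cuda-nn's actual code: per-step temporaries are bump-allocated from one pre-reserved arena and released all at once, so the same bytes are re-used every training step with O(1) alloc/dealloc cost. The real allocator additionally uses the GPU's virtual-memory mapping to grow the physical backing lazily.

```python
# Toy model of a stream-ordered scratch arena (illustrative only).
from contextlib import contextmanager

class ScratchArena:
    def __init__(self, capacity: int):
        self.capacity = capacity  # size of the reserved range, in bytes
        self.offset = 0           # bump pointer

    def alloc(self, size: int, alignment: int = 256) -> int:
        # Round the bump pointer up to the alignment, then advance it.
        start = -(-self.offset // alignment) * alignment
        if start + size > self.capacity:
            raise MemoryError("arena exhausted (the real allocator maps more pages here)")
        self.offset = start + size
        return start  # stands in for a device pointer

    @contextmanager
    def scope(self):
        # Temporaries allocated inside the block share one lifetime, mirroring
        # stream-ordered scratch buffers that all die at the end of a step.
        saved = self.offset
        try:
            yield self
        finally:
            self.offset = saved  # "free" everything at once

arena = ScratchArena(capacity=1 << 30)
for step in range(3):
    with arena.scope():
        fwd = arena.alloc(8 << 20)  # forward-pass temporaries
        bwd = arena.alloc(8 << 20)  # backward-pass temporaries
    # offset is reset here: the next step re-uses the same memory
```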
I can confirm that the […].
My images are 2160x3840, and I wanted to run COLMAP with high-resolution images to improve its accuracy. However, I don't need that much resolution to run NeRF. I guess I could manually resize the images and edit `transforms.json` accordingly. Are you aware of a script to downsample the output of COLMAP to lower memory consumption in NeRF?
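I'm not aware of an existing script in the repo, but a minimal sketch of the manual approach could look like the following. It assumes the `transforms.json` layout written by `scripts/colmap2nerf.py`, with global pixel-space intrinsics (`fl_x`, `fl_y`, `cx`, `cy`, `w`, `h`) and per-frame `file_path` entries; adapt it if your layout differs.

```python
# Sketch: downsample an instant-ngp dataset in place (keep backups!).
import json
from pathlib import Path
from PIL import Image  # pip install Pillow

def downsample_dataset(root: str, factor: int = 2) -> None:
    root = Path(root)
    meta = json.loads((root / "transforms.json").read_text())

    # Focal lengths and principal point are measured in pixels, so they scale
    # with the image; the field of view (camera_angle_x/y) stays the same.
    for key in ("fl_x", "fl_y", "cx", "cy", "w", "h"):
        if key in meta:
            meta[key] /= factor

    for frame in meta["frames"]:
        path = root / frame["file_path"]
        img = Image.open(path)
        img.resize((img.width // factor, img.height // factor),
                   Image.LANCZOS).save(path)  # overwrites the original image

    (root / "transforms.json").write_text(json.dumps(meta, indent=2))

downsample_dataset("data/my_scene", factor=2)
```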
Has anybody tried running this in the cloud? https://au.pcmag.com/graphics-cards/91529/no-gpu-nvidias-rtx-3080-powered-cloud-gaming-service-is-now-open-to-all
Yes, we have tested and configured it in the cloud, and it works fine; you can try it too. Check this out: […]
Hi all, thanks so much for the hard work, this repo is really impressive.
I just have two quick questions regarding scaling the models (a config sketch addressing the first follows below):

1. In the demo video I noticed that training and rendering the NeRF Lego model used about 10.8 GB of VRAM, and I see everywhere that the software was developed on a 3090. Is there a relatively easy way to scale down the resolution of the NeRF and SDF models so that less VRAM is required?

2. I've got the repo installed and running on a remote Ubuntu 20.04 server with an RTX 3070 8 GB graphics card. When forwarding the output to my local machine, frame rates are incredibly low (less than one frame per second). I'm not sure if this is tied to my local machine's hardware or the network we are connecting over.
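On question 1, a minimal sketch of one knob, not an official feature: shrink the hash-grid encoding in a copy of the stock network config to cut VRAM at the cost of detail. The key names (`encoding`, `log2_hashmap_size`, `n_levels`) follow the tiny-cuda-nn HashGrid config used by `configs/nerf/base.json`; verify them against your checkout before relying on this.

```python
# Sketch: derive a smaller network config from the stock one.
import json

with open("configs/nerf/base.json") as f:
    config = json.load(f)

config["encoding"]["log2_hashmap_size"] = 17  # e.g. 19 -> 17: 4x smaller hash table per level
config["encoding"]["n_levels"] = 12           # fewer multi-resolution levels

with open("configs/nerf/small.json", "w") as f:
    json.dump(config, f, indent=2)
```

You'd then point the testbed at `configs/nerf/small.json` via its network-config option, trading reconstruction detail for the smaller footprint.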