
Minimum requirements for inference #31

Open
ShaochengShen opened this issue Nov 27, 2024 · 2 comments

Comments

@ShaochengShen

Hi!
I'm trying to use the inference code to upscale some example clips on an A40 (48 GB) GPU, but I still get an out-of-memory (OOM) error.
Could you tell me the minimum requirements for inference, or, if possible, suggest some ways to reduce GPU memory usage?
Thanks a lot!

@C00reNUT

> Hi! Now I'm just trying to use the inference code to upscale some examples, and I use A40 48G GPU. However, it still reminds me OOM. So Please tell me the minimum requirements for inference, or if you can, please tell me some methods to reduce GPU memory. Thanks a lot!

And I was hoping my 4090 would manage it :) Thank you for providing the reference, but it's strange: seeing how many stars this repo has, one would think it was usable in real life...

@codecowboy

@sczhou would be grateful if you could advise on the minimum VRAM requirements. I've tried it on an RTX A6000 and also get an OOM error.

Loading Upscale-A-Video
[1/1] Processing video:  testclip
Traceback (most recent call last):
  File "/home/Upscale-A-Video/inference_upscale_a_video.py", line 215, in <module>
    output = vframes.new_zeros(output_shape)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.59 GiB (GPU 0; 47.53 GiB total capacity; 9.55 GiB already allocated; 37.55 GiB free; 9.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
(.venv) root@488a72695c82:/home/Upscale-A-Video# 

I've tried the following:
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

which didn't resolve the problem. (In hindsight that's expected: the traceback shows an attempted allocation of 50.59 GiB, more than the GPU's 47.53 GiB total capacity, so allocator tuning for fragmentation can't help; the peak memory itself has to come down.)
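The failing line (`output = vframes.new_zeros(output_shape)`) allocates the full-resolution output tensor for the entire clip in one go. A generic workaround for this pattern, sketched below with NumPy and a toy upscaler standing in for the real model (the function names and shapes here are illustrative assumptions, not this repo's API), is to process the clip a chunk of frames at a time and concatenate the results, so peak memory scales with the chunk size rather than the clip length:

```python
import numpy as np

def upscale_in_chunks(frames, upscale_fn, chunk_size=8):
    # Upscale `frames` (T, H, W, C) a chunk at a time so the peak
    # working set is bounded by `chunk_size`, not the clip length.
    outputs = []
    for start in range(0, len(frames), chunk_size):
        outputs.append(upscale_fn(frames[start:start + chunk_size]))
    return np.concatenate(outputs, axis=0)

# Toy stand-in for the real model: 2x nearest-neighbour upscaling.
def toy_upscale(chunk):
    return chunk.repeat(2, axis=1).repeat(2, axis=2)

video = np.zeros((20, 4, 4, 3), dtype=np.uint8)  # 20 tiny frames
out = upscale_in_chunks(video, toy_upscale, chunk_size=8)
print(out.shape)  # (20, 8, 8, 3)
```

With a real PyTorch model the same idea applies, moving each upscaled chunk to CPU before processing the next; the trade-off is that temporal models lose cross-chunk context unless chunks overlap.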
