Hi!
I'm trying to use the inference code to upscale some of the examples on an A40 (48 GB) GPU, but I still run into an OOM error.
Could you share the minimum requirements for inference, or suggest some ways to reduce GPU memory usage?
Thanks a lot!
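One general way to cut peak GPU memory is to upscale the video a few frames at a time instead of all at once. A minimal sketch, assuming a hypothetical `pipeline` callable and a `(T, C, H, W)` frame tensor named `vframes`; the repo's actual inference API may differ:

```python
import torch

@torch.no_grad()
def upscale_in_chunks(pipeline, vframes, chunk_size=8):
    """Upscale `vframes` (T, C, H, W) a few frames at a time to cap peak VRAM."""
    outputs = []
    for start in range(0, vframes.shape[0], chunk_size):
        chunk = vframes[start:start + chunk_size].cuda()
        out = pipeline(chunk)      # stand-in for the repo's actual inference call
        outputs.append(out.cpu())  # move results off the GPU right away
        del chunk, out
        torch.cuda.empty_cache()   # return cached blocks to the allocator between chunks
    return torch.cat(outputs, dim=0)
```

Caveat: Upscale-A-Video is designed around temporal consistency across frames, so naive chunking like this may introduce artifacts at chunk boundaries.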
And I was hoping that my 4090 would manage it :) Thank you for providing the reference, but it's strange; given how many stars this repo has, one would think it is usable in real life...
@sczhou would be grateful if you could advise on the minimum VRAM requirements. I've tried it on an RTX A6000 and also get an OOM error.
```
Loading Upscale-A-Video
[1/1] Processing video: testclip
Traceback (most recent call last):
  File "/home/Upscale-A-Video/inference_upscale_a_video.py", line 215, in <module>
    output = vframes.new_zeros(output_shape)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.59 GiB (GPU 0; 47.53 GiB total capacity; 9.55 GiB already allocated; 37.55 GiB free; 9.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
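The traceback points at `output = vframes.new_zeros(output_shape)`: `new_zeros` inherits the device and dtype of `vframes`, so the buffer for the entire upscaled video (50.59 GiB here) is requested on the GPU in a single allocation. A possible workaround, assuming the downstream code writes processed chunks into `output` and can tolerate a CPU tensor (untested):

```python
# Replacement for line 215 (assumption, not the repo's official fix): allocate
# the output buffer on the CPU instead of inheriting the GPU device from
# vframes. Each processed chunk then needs a .cpu() copy before being written
# into this buffer.
output = torch.zeros(output_shape, dtype=vframes.dtype, device="cpu")
```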
I've tried the following: export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
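The same allocator settings can be applied from Python, as long as they land in the environment before PyTorch initializes the CUDA context; a small sketch:

```python
# Set the allocator config before torch creates the CUDA context;
# exporting it in the shell (as above) is equivalent.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:128"
)

import torch  # must be imported after the environment variable is set
```

Note, though, that `max_split_size_mb` only mitigates fragmentation of the cached allocator; it cannot make a single 50.59 GiB request fit into 47.53 GiB of total capacity, so chunked processing or moving the output buffer to the CPU is still needed here.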