What is the required GPU memory for running this project? #4
Comments
@LwAiBug To solve the CUDA out-of-memory error, I discovered that I needed to resize the video before using this annotation tool.
Hello, thanks for the feedback. The required GPU memory depends on the video's resolution. Below are some example resolutions and the estimated GPU memory usage:
@TuanTNG Resizing is a good idea; we are working on it and will soon enable the model to support resizing before tracking. Thanks.
I understand, then, that a better solution that does not require changing the video resolution will come in the future.
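Until built-in resizing lands, frames can be down-scaled manually before tracking. The sketch below is a minimal, hedged illustration using NumPy nearest-neighbor striding (a real pipeline would more likely use `cv2.resize` with proper interpolation; the function name `downscale_frame` is ours, not the repository's):

```python
import numpy as np

def downscale_frame(frame: np.ndarray, ratio: float = 0.5) -> np.ndarray:
    """Downscale an H x W x C frame by striding (nearest-neighbor sketch).

    ratio=0.5 keeps every second row/column; in practice you would use
    cv2.resize with interpolation for better visual quality.
    """
    step = max(1, round(1 / ratio))
    return frame[::step, ::step]

# Example: halve a 1080p frame to 540x960 before passing it to the tracker.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
small = downscale_frame(frame, ratio=0.5)
print(small.shape)  # (540, 960, 3)
```

Halving each spatial dimension cuts per-frame memory to roughly a quarter, which is usually enough to get under the tracker's budget.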
Running out of memory with inpainting: 1080p video, RTX 3090, Ubuntu 22.04.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.48 GiB (GPU 0; 23.69 GiB total capacity; 17.76 GiB already allocated; 625.69 MiB free; 18.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
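The error message itself suggests trying `max_split_size_mb`, which caps the size of allocator blocks to reduce fragmentation. A minimal sketch of how to set it (the value 128 is illustrative, not a recommended default):

```python
import os

# PyTorch's CUDA caching allocator reads PYTORCH_CUDA_ALLOC_CONF at its
# first CUDA allocation, so set the variable before importing torch
# (or export it in the shell before launching the app).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# 128 MB is an illustrative value: smaller caps reduce fragmentation
# (helping when reserved >> allocated) at some throughput cost.
```

This helps when the failure is fragmentation (lots of reserved-but-unusable memory); it cannot help if the model genuinely needs more VRAM than the card has.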
If you're on Windows, go to Command Prompt and type it there, close Command Prompt, and try again.
I am getting the same error on a 4090 with 24 GB, trying to inpaint a 1000x562 video.
@g8392 @teidenzero Sorry, the GPU memory requirements listed above are for tracking only; inpainting costs much more. We have added resizing to down-scale the resolution before inpainting. Please see the screenshot below: you can set the down-scaling ratio via the slider. Thanks.
Can you please provide rough estimates of how much VRAM is needed for inpainting at common resolutions like 720p and 1080p? Thanks!
Based on my personal experience, I have used E2FGVI inpainting to process many videos. For a 720x1280 video, with over 40 GB of GPU memory you can process 100 frames at once. I usually set neighbor_stride to 5; reducing this parameter helps decrease GPU memory consumption, but may also reduce inpainting quality for some videos. On an A40, inpainting 100 frames of a 720x1280 video takes about 80 seconds. I hope this answer can serve as a reference.
@entrusc @g8392 @teidenzero Hello, the table below shows the estimated GPU memory requirements for inpainting (with the default config in E2FGVI; OOM indicates > 48 GB):
It is observed that the GPU memory requirement in E2FGVI depends on both video resolution and video length, because E2FGVI evenly samples frames as temporal context: the longer the video, the more frames are involved during inpainting, leading to out-of-memory (OOM) errors. @zhangjingzj96 thanks for your information! 🚩 We have now shifted E2FGVI from inpainting the whole video at once to sequentially inpainting a set of fixed-length sub-videos (e.g., 50 frames each), which effectively decouples the GPU memory requirement from video length. After this modification, the estimated GPU memory requirements are (with default configurations):
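The sub-video strategy above can be sketched in a few lines. This is a minimal illustration of the idea, not Track-Anything's actual code; `inpaint_fn` and the chunk length of 50 are assumptions taken from the description:

```python
def split_into_subvideos(frames, chunk_len=50):
    """Split a frame list into fixed-length sub-videos so that peak GPU
    memory depends on chunk_len rather than on total video length."""
    return [frames[i:i + chunk_len] for i in range(0, len(frames), chunk_len)]

def inpaint_video(frames, inpaint_fn, chunk_len=50):
    # inpaint_fn stands in for one E2FGVI call on a sub-video (hypothetical).
    result = []
    for chunk in split_into_subvideos(frames, chunk_len):
        result.extend(inpaint_fn(chunk))
    return result

# A 120-frame video becomes three sub-videos of 50, 50, and 20 frames.
chunks = split_into_subvideos(list(range(120)), chunk_len=50)
print([len(c) for c in chunks])  # [50, 50, 20]
```

Because each call only ever sees `chunk_len` frames, memory use plateaus instead of growing with the video, at the cost of less temporal context across chunk boundaries.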
As mentioned by @zhangjingzj96, reducing neighbor_stride can further reduce GPU memory usage; it is accessible in Track-Anything at: Track-Anything/inpainter/config/config.yaml Lines 1 to 7 in 48f8574
Also, decreasing num_external_ref or increasing step can reduce memory usage. Besides adjusting the E2FGVI configuration, Track-Anything supports resizing the video before inpainting, as mentioned in my previous response. Thanks.
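For orientation, the memory-related knobs discussed above live in inpainter/config/config.yaml. The fragment below is a hedged sketch only: the key names come from this thread, but the values and comments are illustrative, not the repository's verified defaults.

```yaml
# inpainter/config/config.yaml (illustrative values, not verified defaults)
neighbor_stride: 5     # smaller -> less GPU memory, possibly lower quality
num_external_ref: 3    # fewer external reference frames -> less memory
step: 10               # larger sampling step -> fewer context frames in memory
```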
If we used Track-Anything to produce a sequence of image masks, would the limit on video length be practically erased?
Getting the error below. GPU / driver specifications: (nvidia-smi output not captured). Already tried: 1. setting max_split_size_mb. PyTorch was installed with: pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116. Please help!
For tasks that don't need many mask corrections, you can edit the "vos_tracking_video" function inside app.py: instead of passing all frames at once to model.generator(), call it every n frames and use the last prediction as the template_mask for the frames in each step. You should also save results and clear memory at each step.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.17 GiB (GPU 0; 14.84 GiB total capacity; 11.81 GiB already allocated; 1.67 GiB free; 12.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Still getting errors even with 16 GB of GPU memory.