How to release the GPU memory after calling 'NVVLVideoLoader'? #9
Comments
@blankWorld I think this is supposed to get cleaned up automatically by the `__dealloc__` method, but I'm also seeing memory grow progressively. It might be that I'm creating new VideoLoaders faster than the memory is released?
I'm facing the same problem. However, I don't think that is an appropriate fix, because the overhead of freeing GPU memory is significant.
@yuyay Your solution works just fine.
Right now I find it hard to manage memory when using pynvvl with a PyTorch dataloader. The problem: when you export a CuPy array via DLPack and convert it to a tensor with from_dlpack, PyTorch does not treat the memory as an allocation it owns. At the same time, because from_dlpack performs no copy (the tensor is a view of the original CuPy buffer), the block cannot be released by calling cupy.get_default_memory_pool().free_all_blocks() while the tensor is still alive. The result is a memory leak: the DLPack-exported CuPy array can neither be freed through CuPy nor cleaned up automatically by PyTorch when the training iteration ends. Memory consumption grows progressively until it hits the device limit and an out-of-memory error is raised. Has anyone come up with a solution for this yet?
loader = pynvvl.NVVLVideoLoader(device_id=0, log_level='error')
video = loader.read_sequence(video_root).get()
With my dataloader code, GPU memory usage increases progressively. What should I do to release the GPU memory?