This repository has been archived by the owner on Mar 8, 2024. It is now read-only.

Unable to run the code on an RTX8000, out of memory #8

Open
mohitm1994 opened this issue Nov 7, 2022 · 0 comments

Comments

@mohitm1994

Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 47.46 GiB total capacity; 44.29 GiB already allocated; 862.56 MiB free; 45.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
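
A minimal sketch of the mitigation the error message itself points at, assuming a standard PyTorch training script; the allocator value, loader, model, and accumulation count below are placeholders, not taken from this repository:

import os

# The allocator option the error message recommends; it has to be in the
# environment before the first CUDA allocation, so set it before importing torch.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

# If fragmentation is not the cause, the 4 GiB allocation still will not fit in
# ~862 MiB of free memory; the usual fallback is a smaller per-step batch,
# optionally with gradient accumulation to keep the effective batch size.
accumulation_steps = 4  # placeholder value
optimizer.zero_grad()
for step, (inputs, targets) in enumerate(loader):
    # Assumes the model returns a scalar loss for the (inputs, targets) pair.
    loss = model(inputs, targets) / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()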
