out of memory issue #4
Comments
Also got out of RAM: `Error occurred when executing PipelineLoader: Allocation on device 0 would exceed allowed memory. (out of memory) File "/home/runner/ComfyUI/execution.py", line 151, in recursive_execute`
Yes, it is a known issue that needs to be fixed; I only managed to make it work on GPUs with 32 GB+ of VRAM. Don't hesitate to make PRs if you find a way to fix it :)
Sure, I'll take a look.
Is there a way to solve the out-of-memory issue now?
Here's the error I got: `Some weights of the model checkpoint were not used when initializing UNet2DConditionModel`
I've encountered this error too. When I launch your nodes with your example, it works fine: it uses the preloaded image and applies the garment. The problem is when I add an SD checkpoint and try to send the generated image into the Run IDM-VTON Inference node, where it throws this error. My specs:
I think we need some kind of memory unloading?
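For discussion's sake, a minimal sketch of what such unloading could look like, assuming the loaded pipeline behaves like a regular torch module. The helper name `free_pipeline` is hypothetical, not part of the node's actual API:

```python
import gc

def free_pipeline(pipe):
    """Hypothetical helper (not the node's actual API): release a
    loaded pipeline's GPU memory once inference is done."""
    try:
        import torch
    except ImportError:
        return False  # torch not installed; nothing to free
    pipe.to("cpu")                 # move the weights off the GPU first
    del pipe
    gc.collect()                   # drop lingering Python references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()   # return cached CUDA blocks to the driver
    elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        torch.mps.empty_cache()    # same idea on Apple silicon (torch >= 2.0)
    return True
```

ComfyUI also has its own model-management machinery for evicting models between nodes, which would be the more idiomatic place to hook this in.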
Hi @TemryL, the PipelineLoader node crashes ComfyUI no matter what I choose: f32/f16/b
Fixed by adding 16 GB more RAM to WSL, so 24 GB VRAM + 32 GB RAM seems to be enough.
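For anyone hitting the same wall: WSL2's memory cap is configured in `.wslconfig` on the Windows host. A sketch matching the numbers above (adjust to your machine):

```ini
; %UserProfile%\.wslconfig on the Windows host
[wsl2]
memory=32GB   ; cap WSL2 RAM at 32 GB
swap=16GB     ; optional extra headroom via swap
```

Run `wsl --shutdown` afterwards so the new limits take effect on the next start.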
Hello, my server also reported out of memory with 24 GB VRAM and 240 GB RAM. Are you sure you have resolved this issue?
@kunkun-zhu hi, |
should fix:
`RuntimeError: MPS backend out of memory (MPS allocated: 16.56 GB, other allocations: 128.70 MB, max allowed: 18.13 GB). Tried to allocate 3.00 GB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).`
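A sketch of applying the workaround the error message itself suggests; PyTorch's own warning applies (it may cause system-wide memory pressure), so treat it as a last resort:

```shell
# Lift the MPS allocator's upper memory limit before starting ComfyUI
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
```

Then start ComfyUI as usual in the same shell (e.g. `python main.py` from the ComfyUI directory).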
Same issue on CUDA with a small GPU.
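On the CUDA side, a couple of generic knobs sometimes help with fragmentation-driven OOMs; this is a sketch of general PyTorch/ComfyUI options, not a confirmed fix for this node:

```shell
# Let the CUDA caching allocator grow segments instead of fragmenting
# (PyTorch >= 2.0 allocator option)
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

ComfyUI also exposes `--lowvram` and `--novram` launch flags that trade speed for memory, which may be worth trying before anything else.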