
Error occurred when executing PipelineLoader: Allocation on device #34

Be-coder opened this issue May 21, 2024 · 7 comments
Be-coder commented May 21, 2024

[two screenshots attached]

What is the problem? Is there not enough memory? Is there currently a suitable solution? Thank you!

TemryL (Owner) commented May 21, 2024

Hi, thanks for your report. Can you show the full trace?

@Be-coder (Author)

[screenshots of the full trace]

Here it is. And weight_dtype is float16.
Thank you very much.

TemryL (Owner) commented May 21, 2024

Most likely an OOM exception, as in #4. Can you specify your GPU config?

@Be-coder (Author)

It is an RTX 4060 with 8 GB VRAM and 16 GB system RAM.
During runtime, system RAM usage peaks while GPU memory usage stays low. Can my configuration run your code?
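As a rough sanity check on whether 8 GB can be enough: IDM-VTON is built on an SDXL-class UNet, which is commonly cited at roughly 2.6 billion parameters (an assumed figure here, not measured from this repo). A back-of-the-envelope estimate of the fp16 weight footprint alone:

```python
# Rough VRAM estimate for holding model weights alone, before activations,
# attention buffers, or the VAE and text encoders are counted.
# The 2.6e9 parameter count for an SDXL-class UNet is an assumption.
def weight_bytes(n_params: int, bytes_per_param: int = 2) -> int:
    """Bytes needed to store n_params weights (2 bytes each in float16)."""
    return n_params * bytes_per_param

unet_params = 2_600_000_000  # assumed SDXL-class UNet size
gib = weight_bytes(unet_params) / 2**30
print(f"~{gib:.1f} GiB for fp16 UNet weights alone")
```

With the VAE and text encoders on top, fp16 weights alone approach or exceed 8 GB, which would match the symptom of the card running out of memory before inference proper begins.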

@AfterHAL

Hi.
Same error for me.
Running on an RTX 4080 with 16 GB VRAM (2 GB used by the system).
[screenshot]

GPU VRAM is filled up in 10 seconds.
[screenshot]

@Pythonpa

+1
GPU: RTX 3090, 24 GB VRAM. Below is the error info:

Error occurred when executing IDM-VTON:

Allocation on device

  File "C:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\nodes\idm_vton.py", line 100, in make_inference
    images = pipeline(
  File "C:\ComfyUI\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 1630, in __call__
    mask, masked_image_latents = self.prepare_mask_latents(
  File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 961, in prepare_mask_latents
    masked_image_latents = self._encode_vae_image(masked_image, generator=generator)
  File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 921, in _encode_vae_image
    image_latents = retrieve_latents(self.vae.encode(image), generator=generator)
  File "C:\ComfyUI\python\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 260, in encode
    h = self.encoder(x)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\diffusers\models\autoencoders\vae.py", line 172, in forward
    sample = down_block(sample)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1465, in forward
    hidden_states = resnet(hidden_states, temb=None)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\diffusers\models\resnet.py", line 332, in forward
    hidden_states = self.norm1(hidden_states)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
    return F.group_norm(
  File "C:\ComfyUI\python\lib\site-packages\torch\nn\functional.py", line 2588, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
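The trace above fails inside `vae.encode`, where activation memory spikes. A common mitigation for this spike (it is the idea behind diffusers' `enable_slicing` / `enable_tiling` on the VAE) is to encode a batch one sample at a time so peak activation memory scales with a single image rather than the whole batch. A minimal sketch of the slicing logic, using a toy downsampling `encode` as a stand-in for the real VAE encoder:

```python
import torch
import torch.nn.functional as F

# Toy stand-in for vae.encode: an 8x spatial downsample. The real VAE is a
# convolutional encoder, but the slicing logic is identical.
def encode(x: torch.Tensor) -> torch.Tensor:
    return F.avg_pool2d(x, kernel_size=8)

# Sliced encode: process one sample at a time, then reassemble the batch.
def sliced_encode(batch: torch.Tensor) -> torch.Tensor:
    return torch.cat([encode(img.unsqueeze(0)) for img in batch], dim=0)

batch = torch.randn(4, 3, 64, 64)
out = sliced_encode(batch)
print(out.shape)  # torch.Size([4, 3, 8, 8])
```

The output is identical to encoding the whole batch at once; the only difference is the peak memory of the intermediate activations.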

@ZombieNeighbor

Hello,

Same error here. The Pipeline Loader is very slow, then fails after 2-5 minutes with an Out Of Memory error.

[screenshot]


!!! Exception during processing!!! Allocation on device
Traceback (most recent call last):
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\nodes\pipeline_loader.py", line 68, in load_pipeline
    ).requires_grad_(False).eval().to(DEVICE)
    ^^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
    ^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  [Previous line repeated 5 more times]
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
    ^^^^^^^^^
  File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
    ^^^^^
torch.cuda.OutOfMemoryError: Allocation on device
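This second trace runs out of memory earlier than the first: inside `.to(DEVICE)`, while the Pipeline Loader is moving the entire pipeline onto the GPU at load time. The usual workaround for that failure mode is sequential CPU offload (what diffusers exposes as `enable_model_cpu_offload`): keep submodules on the CPU and move each onto the GPU only for its forward pass. A minimal sketch of the idea using plain PyTorch forward hooks (an illustration of the technique, not the node's actual code):

```python
import torch
import torch.nn as nn

# Fall back to CPU so the sketch also runs on machines without CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def _to_device(module: nn.Module, args) -> None:
    module.to(device)   # pull this block onto the compute device just before use

def _to_cpu(module: nn.Module, args, output) -> None:
    module.to("cpu")    # release VRAM as soon as the block has run

def add_offload_hooks(block: nn.Module) -> None:
    block.register_forward_pre_hook(_to_device)
    block.register_forward_hook(_to_cpu)

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
for block in net:
    add_offload_hooks(block)

x = torch.randn(2, 8, device=device)
y = net(x)              # each block visits the GPU one at a time
print(y.shape)          # torch.Size([2, 4])
```

The trade-off is slower inference (one host-to-device transfer per block) in exchange for a peak VRAM footprint of roughly one block instead of the whole model.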
