
Torch not compiled with CUDA enabled #24

Open
foysalremon opened this issue Sep 22, 2024 · 8 comments

Comments

@foysalremon

I think it's because support isn't included for other GPUs, like Intel Arc, which don't have CUDA. If so, I'm requesting that such support be added.

Error Details

  • Node Type: CatVTONWrapper
  • Exception Type: AssertionError
  • Exception Message: Torch not compiled with CUDA enabled
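For context, this assertion fires whenever `.to("cuda")` (or any other CUDA call) runs on a PyTorch build compiled without CUDA support, as on this XPU build. A minimal sketch of the failure mode and the usual guard (illustrative only, not the wrapper's actual code):

```python
import torch

# On a CPU-only or XPU-only PyTorch build, torch.cuda.is_available() is False,
# and moving a module to "cuda" raises the AssertionError seen above.
model = torch.nn.Linear(2, 2)

# The usual guard: fall back to CPU when CUDA isn't compiled in.
target = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(target)
print(model.weight.device.type)
```

The pipeline code in `cat_vton.py` hard-codes the device instead of guarding like this, which is why it fails on non-CUDA builds.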

Stack Trace

  File "G:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "G:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "G:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "G:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "G:\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\cat_vton.py", line 42, in catvton
    pipeline = CatVTONPipeline(

  File "G:\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\catvton\pipeline.py", line 42, in __init__
    self.vae = AutoencoderKL.from_pretrained(os.path.join(folder_paths.models_dir, "CatVTON", "sd-vae-ft-mse")).to(device, dtype=weight_dtype)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")

System Information

  • ComfyUI Version: v0.2.2-58-g38c6908
  • Arguments: main.py --bf16-unet
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.1.0a0+cxx11.abi

Devices

  • Name: xpu
    • Type: xpu
    • VRAM Total: 8319483904
    • VRAM Free: 4993683456
    • Torch VRAM Total: 3487563776
    • Torch VRAM Free: 161763328

Logs

2024-09-22 16:26:39,791 - root - INFO - Total VRAM 7934 MB, total RAM 32556 MB
2024-09-22 16:26:39,791 - root - INFO - pytorch version: 2.1.0a0+cxx11.abi
2024-09-22 16:26:39,792 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-22 16:26:39,792 - root - INFO - Device: xpu
2024-09-22 16:26:39,809 - root - INFO - Using pytorch cross attention
2024-09-22 16:26:40,395 - root - INFO - [Prompt Server] web root: G:\ComfyUI\web
2024-09-22 16:26:43,101 - root - INFO - 
Import times for custom nodes:
2024-09-22 16:26:43,101 - root - INFO -    0.0 seconds: G:\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-22 16:26:43,101 - root - INFO -    0.0 seconds: G:\ComfyUI\custom_nodes\ComfyUI_pose_inter
2024-09-22 16:26:43,101 - root - INFO -    0.0 seconds: G:\ComfyUI\custom_nodes\cg-use-everywhere
2024-09-22 16:26:43,101 - root - INFO -    0.0 seconds: G:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-09-22 16:26:43,101 - root - INFO -    0.0 seconds: G:\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper
2024-09-22 16:26:43,101 - root - INFO -    0.1 seconds: G:\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-09-22 16:26:43,101 - root - INFO -    0.2 seconds: G:\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-09-22 16:26:43,101 - root - INFO -    0.2 seconds: G:\ComfyUI\custom_nodes\ComfyUI-Manager
2024-09-22 16:26:43,101 - root - INFO -    0.3 seconds: G:\ComfyUI\custom_nodes\ComfyUI-Dwpose-Tensorrt
2024-09-22 16:26:43,102 - root - INFO -    0.4 seconds: G:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-09-22 16:26:43,102 - root - INFO -    0.8 seconds: G:\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2024-09-22 16:26:43,102 - root - INFO - 
2024-09-22 16:26:43,109 - root - INFO - Starting server

2024-09-22 16:26:43,109 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-22 16:26:46,973 - root - INFO - got prompt
2024-09-22 16:26:47,032 - comfyui_segment_anything - WARNING - using extra model: G:\ComfyUI\models\sams\sam_vit_h_4b8939.pth
2024-09-22 16:27:02,235 - root - ERROR - !!! Exception during processing !!! Torch not compiled with CUDA enabled
2024-09-22 16:27:02,236 - root - ERROR - Traceback (most recent call last):
  File "G:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "G:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "G:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "G:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "G:\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\cat_vton.py", line 42, in catvton
    pipeline = CatVTONPipeline(
  File "G:\ComfyUI\custom_nodes\ComfyUI_CatVTON_Wrapper\py\catvton\pipeline.py", line 42, in __init__
    self.vae = AutoencoderKL.from_pretrained(os.path.join(folder_paths.models_dir, "CatVTON", "sd-vae-ft-mse")).to(device, dtype=weight_dtype)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "G:\ComfyUI\comfyui_env\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

2024-09-22 16:27:02,237 - root - INFO - Prompt executed in 15.26 seconds
2024-09-22 16:27:12,577 - root - INFO - got prompt
2024-09-22 16:27:12,626 - root - ERROR - !!! Exception during processing !!! Torch not compiled with CUDA enabled
2024-09-22 16:27:12,626 - root - ERROR - Traceback (most recent call last):
  (identical traceback to the one above)
AssertionError: Torch not compiled with CUDA enabled

2024-09-22 16:27:12,627 - root - INFO - Prompt executed in 0.05 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":6,"last_link_id":6,"nodes":[{"id":3,"type":"LayerMask: SegmentAnythingUltra V2","pos":{"0":414,"1":424},"size":{"0":428.4000244140625,"1":342},"flags":{},"order":2,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":1}],"outputs":[{"name":"image","type":"IMAGE","links":null,"shape":3},{"name":"mask","type":"MASK","links":[2,3],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"LayerMask: SegmentAnythingUltra V2"},"widgets_values":["sam_vit_h (2.56GB)","GroundingDINO_SwinT_OGC (694MB)",0.3,"VITMatte",6,6,0.01,0.99,false,"shirt, pants,","cuda",2],"color":"rgba(27, 80, 119, 0.7)"},{"id":4,"type":"LayerMask: MaskPreview","pos":{"0":903,"1":444},"size":{"0":277.20001220703125,"1":246},"flags":{},"order":3,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":2}],"outputs":[],"properties":{"Node name for S&R":"LayerMask: MaskPreview"},"color":"rgba(27, 80, 119, 0.7)"},{"id":5,"type":"CatVTONWrapper","pos":{"0":901,"1":32},"size":{"0":315,"1":218},"flags":{},"order":4,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":5},{"name":"mask","type":"MASK","link":3},{"name":"refer_image","type":"IMAGE","link":4}],"outputs":[{"name":"image","type":"IMAGE","links":[6],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CatVTONWrapper"},"widgets_values":[25,"fp16",549557970686997,"randomize",40,2.5]},{"id":6,"type":"PreviewImage","pos":{"0":1273,"1":33},"size":{"0":323.8887023925781,"1":455.15203857421875},"flags":{},"order":5,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":6}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"}},{"id":1,"type":"LoadImage","pos":{"0":45,"1":28},"size":{"0":309.2167053222656,"1":333.2966613769531},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[1,5],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["13735824_1108784102510186_1530724342781074013_o.jpg","image"]},{"id":2,"type":"LoadImage","pos":{"0":45,"1":423},"size":{"0":308.3948059082031,"1":348.0320739746094},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[4],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["71AihluQr1L._AC_SX569__upscayl_4x_ultrasharp.jpg","image"]}],"links":[[1,1,0,3,0,"IMAGE"],[2,3,1,4,0,"MASK"],[3,3,1,5,1,"MASK"],[4,2,0,5,2,"IMAGE"],[5,1,0,5,0,"IMAGE"],[6,5,0,6,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9090909090909095,"offset":[291.0420536792561,101.14600019424024]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@chflame163
Owner

I don't have the corresponding devices for testing, so to prevent problems I removed the code for those devices.
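For reference, one hedged sketch of what multi-device support could look like, a simple fallback chain (illustrative only; `torch.xpu` exists only on Intel XPU-enabled builds, so it is guarded with `getattr`):

```python
import torch

def pick_device() -> str:
    """Pick the best available accelerator: CUDA, then Intel XPU,
    then Apple MPS, falling back to CPU. Illustrative sketch only."""
    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu is absent on stock builds; guard before touching it.
    xpu = getattr(torch, "xpu", None)
    if xpu is not None and xpu.is_available():
        return "xpu"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())
```

A chain like this would let the same node code run unchanged on CUDA, Arc (XPU), and Apple silicon (MPS) machines.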

@foysalremon
Author

@chflame163 Could this be added? I have an Intel Arc 750 available for testing. Alternatively, could it be provided in a separate branch for those who need it? Non-CUDA GPUs are becoming more popular these days, so it would be great to support them.

@zixuan96

I did not have corresponding device testing, so in order to prevent problems, I removed the code for these devices.

I'm using Apple M2 and M3 chips. I would be very grateful if you could add them back, so that others and I can use this extension and provide feedback on non-CUDA GPUs.

@chflame163
Owner

I tried to run it on an Intel CPU, but was unsuccessful, so I don't plan to continue. Sorry, guys.

@malinowskij

malinowskij commented Dec 17, 2024

Just edit cat_vton.py, changing 'cuda' to 'mps':
[screenshot]

But now I have another error :D :
[screenshot]

OK, no longer a valid problem.
I modified the code so that it now uses windowed attention, and it works.
[screenshot]

But... not as expected :D
From this:
[screenshot]

To:
[screenshot]

Time to buy an RTX with CUDA support, not Apple silicon 😛

@zer0factor

zer0factor commented Dec 21, 2024

Like @malinowskij, I solved it with these steps.
[screenshot]

In ComfyUI_CatVTON_Wrapper, cat_vton.py:
device = "mps"

In segment_anything, build_sam_hq.py:
state_dict = torch.load(f, map_location=torch.device("mps"))

In ComfyUI_CatVTON_Wrapper, attn_processor.py:
Did some memory management here.
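The `map_location` change above is the standard way to load a checkpoint saved on a CUDA machine onto a different device. A small self-contained sketch of the idea (using an in-memory buffer as a stand-in for the checkpoint file `f`):

```python
import io
import torch

# map_location re-targets the checkpoint's tensors to an available device
# instead of the device they were saved on (typically a hard-coded "cuda").
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

buf = io.BytesIO()                        # stand-in for the checkpoint file f
torch.save({"w": torch.ones(2, 2)}, buf)
buf.seek(0)
state_dict = torch.load(buf, map_location=device)
print(state_dict["w"].device.type)
```

Without `map_location`, loading a CUDA-saved checkpoint on a non-CUDA build raises the same "Torch not compiled with CUDA enabled" assertion.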

@eljabbaryhicham

I tried your solution, but the node won't load!

@zer0factor

I tried your solution, but the node won't load!

You need to install whatever is missing, and we can't tell what that is from just one sentence.
