execute error #130

Open
atang220 opened this issue Dec 27, 2024 · 0 comments

ComfyUI Error Report

Error Details

  • Node ID: 17
  • Node Type: PulidInsightFaceLoader
  • Exception Type: AssertionError
  • Exception Message: (empty)

Stack Trace

  File "/root/autodl-tmp/ComyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/root/autodl-tmp/ComyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/root/autodl-tmp/ComyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/root/autodl-tmp/ComyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/root/autodl-tmp/ComyUI/custom_nodes/PuLID_ComfyUI/pulid.py", line 240, in load_insightface
    model = FaceAnalysis(name="antelopev2", root=INSIGHTFACE_DIR, providers=[provider + 'ExecutionProvider',]) # alternative to buffalo_l

  File "/root/miniconda3/lib/python3.10/site-packages/insightface/app/face_analysis.py", line 43, in __init__
    assert 'detection' in self.models
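
For context on the assertion: insightface's FaceAnalysis scans root/models/<name> for .onnx files and keys the loaded models by task name, so assert 'detection' in self.models fails when the antelopev2 folder is missing, empty, or the downloaded archive was unpacked one level too deep (models/antelopev2/antelopev2/). Below is a minimal sketch, assuming the usual PuLID_ComfyUI setup where INSIGHTFACE_DIR points at ComfyUI's models/insightface folder; the exact path is an assumption, not taken from this report.

  import os
  from insightface.app import FaceAnalysis

  # Assumed location: PuLID_ComfyUI normally uses ComfyUI's models/insightface
  # folder as INSIGHTFACE_DIR. Adjust to match the actual install.
  INSIGHTFACE_DIR = "/root/autodl-tmp/ComyUI/models/insightface"
  model_dir = os.path.join(INSIGHTFACE_DIR, "models", "antelopev2")

  # The antelopev2 .onnx files must sit directly in this folder; an empty or
  # doubly nested folder reproduces the "assert 'detection' in self.models" error.
  if not os.path.isdir(model_dir):
      print("missing folder:", model_dir)
  else:
      onnx_files = sorted(f for f in os.listdir(model_dir) if f.endswith(".onnx"))
      print("found:", onnx_files)
      if onnx_files:
          # Same call as pulid.py line 240, with the CUDA provider spelled out.
          app = FaceAnalysis(name="antelopev2", root=INSIGHTFACE_DIR,
                             providers=["CUDAExecutionProvider"])
          app.prepare(ctx_id=0, det_size=(640, 640))
          print("loaded models:", list(app.models.keys()))

If the folder check passes but the loader still asserts, the listed .onnx files are likely incomplete or corrupted downloads.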

System Information

  • ComfyUI Version: v0.3.5-4-g95d8713
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu118

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25430786048 bytes
    • VRAM Free: 23437518840 bytes
    • Torch VRAM Total: 1677721600 bytes
    • Torch VRAM Free: 32581624 bytes