Description
When running the workflow, it gets to the WanVideo Sampler node at 86%, then the error below appears. When I look into the cache folder, the JSON file appears to be there, but a new temp directory is created on every run and the error comes back each time. How can I go about fixing this?
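In case it's useful while this gets triaged: the cache directories involved are easy to reset on my end. Below is a minimal sketch, assuming the default Windows locations that appear in the traceback (the %TEMP%\torchinductor_<user> tree and Triton's ~/.triton cache); as I understand it, the TORCHINDUCTOR_CACHE_DIR and TRITON_CACHE_DIR environment variables can also point these caches at a folder excluded from antivirus/OneDrive sync, in case something is deleting the temp files mid-compile.

# Sketch: clear the TorchInductor/Triton compile caches before relaunching
# ComfyUI, so the next run recompiles into a fresh cache directory.
import os
import shutil
import tempfile

temp = tempfile.gettempdir()  # e.g. C:\Users\JARRA\AppData\Local\Temp
for name in os.listdir(temp):
    if name.startswith("torchinductor_"):
        shutil.rmtree(os.path.join(temp, name), ignore_errors=True)

# Triton's standalone cache location; harmless to clear as well.
shutil.rmtree(os.path.expanduser("~/.triton/cache"), ignore_errors=True)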
Log:
Adding extra search path custom_nodes C:\Users\JARRA\Documents\ComfyUI\custom_nodes
Adding extra search path download_model_base C:\Users\JARRA\Documents\ComfyUI\models
Adding extra search path custom_nodes C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\custom_nodes
Setting output directory to: C:\Users\JARRA\Documents\ComfyUI\output
Setting input directory to: C:\Users\JARRA\Documents\ComfyUI\input
Setting user directory to: C:\Users\JARRA\Documents\ComfyUI\user
[START] Security scan
[DONE] Security scan
** ComfyUI startup time: 2026-01-08 16:33:18.744
** Platform: Windows
** Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
** Python executable: C:\Users\JARRA\Documents\ComfyUI\.venv\Scripts\python.exe
** ComfyUI Path: C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI
** ComfyUI Base Folder Path: C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI
** User directory: C:\Users\JARRA\Documents\ComfyUI\user
** ComfyUI-Manager config path: C:\Users\JARRA\Documents\ComfyUI\user_manager\config.ini
** Log path: C:\Users\JARRA\Documents\ComfyUI\user\comfyui.log
[ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
[PRE] ComfyUI-Manager
Checkpoint files will always be loaded safely.
C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\torch\cuda\__init__.py:283: UserWarning:
Found GPU1 NVIDIA GeForce GTX 1080 which is of cuda capability 6.1.
Minimum and Maximum cuda capability supported by this version of PyTorch is
(7.5) - (12.0)
warnings.warn(
C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\torch\cuda\__init__.py:304: UserWarning:
Please install PyTorch with a following CUDA
configurations: 12.6 following instructions at
https://pytorch.org/get-started/locally/
warnings.warn(matched_cuda_warn.format(matched_arches))
C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\torch\cuda\__init__.py:326: UserWarning:
NVIDIA GeForce GTX 1080 with CUDA capability sm_61 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_75 sm_80 sm_86 sm_90 sm_100 sm_120.
If you want to use the NVIDIA GeForce GTX 1080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
Total VRAM 12288 MB, total RAM 32624 MB
pytorch version: 2.9.1+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 14680.0
working around nvidia conv3d memory bug.
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Using pytorch attention
Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.8.2
[Prompt Server] web root: C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app
[START] ComfyUI-Manager
[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] The matrix sharing feature has been disabled because the matrix-nio dependency is not installed.
To use this feature, please run the following command:
C:\Users\JARRA\Documents\ComfyUI\.venv\Scripts\python.exe -m pip install matrix-nio
Total VRAM 12288 MB, total RAM 32624 MB
pytorch version: 2.9.1+cu130
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 14680.0
'resemble-perth' not found. Watermarking will be unavailable.
[MultiGPU Core Patching] Patching mm.soft_empty_cache for Comprehensive Memory Management (VRAM + CPU + Store Pruning)
[MultiGPU Core Patching] Patching mm.get_torch_device, mm.text_encoder_device, mm.unet_offload_device
[MultiGPU DEBUG] Initial current_device: cuda:0
[MultiGPU DEBUG] Initial current_text_encoder_device: cuda:0
[MultiGPU DEBUG] Initial current_unet_offload_device: cpu
[MultiGPU] Initiating custom_node Registration. . .
custom_node Found Nodes
ComfyUI-LTXVideo N 0
ComfyUI-Florence2 N 0
ComfyUI_bitsandbytes_NF4 N 0
x-flux-comfyui N 0
ComfyUI-MMAudio N 0
ComfyUI-GGUF N 0
PuLID_ComfyUI N 0
ComfyUI-WanVideoWrapper Y 20
[MultiGPU] Registration complete. Final mappings: CheckpointLoaderAdvancedMultiGPU, CheckpointLoaderAdvancedDisTorch2MultiGPU, UNetLoaderLP, UNETLoaderMultiGPU, VAELoaderMultiGPU, CLIPLoaderMultiGPU, DualCLIPLoaderMultiGPU, TripleCLIPLoaderMultiGPU, QuadrupleCLIPLoaderMultiGPU, CLIPVisionLoaderMultiGPU, CheckpointLoaderSimpleMultiGPU, ControlNetLoaderMultiGPU, DiffusersLoaderMultiGPU, DiffControlNetLoaderMultiGPU, UNETLoaderDisTorch2MultiGPU, VAELoaderDisTorch2MultiGPU, CLIPLoaderDisTorch2MultiGPU, DualCLIPLoaderDisTorch2MultiGPU, TripleCLIPLoaderDisTorch2MultiGPU, QuadrupleCLIPLoaderDisTorch2MultiGPU, CLIPVisionLoaderDisTorch2MultiGPU, CheckpointLoaderSimpleDisTorch2MultiGPU, ControlNetLoaderDisTorch2MultiGPU, DiffusersLoaderDisTorch2MultiGPU, DiffControlNetLoaderDisTorch2MultiGPU, LoadWanVideoT5TextEncoderMultiGPU, WanVideoTextEncodeMultiGPU, WanVideoTextEncodeCachedMultiGPU, WanVideoTextEncodeSingleMultiGPU, WanVideoVAELoaderMultiGPU, WanVideoTinyVAELoaderMultiGPU, WanVideoBlockSwapMultiGPU, WanVideoImageToVideoEncodeMultiGPU, WanVideoDecodeMultiGPU, WanVideoModelLoaderMultiGPU, WanVideoSamplerMultiGPU, WanVideoVACEEncodeMultiGPU, WanVideoEncodeMultiGPU, LoadWanVideoClipTextEncoderMultiGPU, WanVideoClipVisionEncodeMultiGPU, WanVideoControlnetLoaderMultiGPU, FantasyTalkingModelLoaderMultiGPU, Wav2VecModelLoaderMultiGPU, WanVideoUni3C_ControlnetLoaderMultiGPU, DownloadAndLoadWav2VecModelMultiGPU
WanVideoWrapper WARNING: FantasyPortrait nodes not available: No module named 'onnxruntime'
Import times for custom nodes:
0.0 seconds: C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-MelBandRoFormer
0.0 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\comfyui-multigpu
0.1 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\comfyui-kjnodes
0.1 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\comfyui-videohelpersuite
0.7 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper
2.0 seconds: C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-ChatterboxTTS
Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Starting server
To see the GUI go to: http://127.0.0.1:8000
comfyui-frontend-package not found in requirements.txt
[DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/widgetInputs.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
comfyui-frontend-package not found in requirements.txt
got prompt
C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\torch\functional.py:681: UserWarning: A window was not provided. A rectangular window will be applied,which is known to cause spectral leakage. Other windows such as torch.hann_window or torch.hamming_window are recommended to reduce spectral leakage.To suppress this warning and use a rectangular window, explicitly set window=torch.ones(n_fft, device=<device>). (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\SpectralOps.cpp:842.)
return _VF.stft( # type: ignore[attr-defined]
Converted mono input to stereo.
Resampling input 24000 to 44100
Processing chunks: 100%|██████████| 2/2 [00:00<00:00, 2.38it/s]
[MultiTalk] --- Raw speaker lengths (samples) ---
speaker 1: 124160 samples (shape: torch.Size([1, 1, 124160]))
[MultiTalk] Audio duration (194 frames) is shorter than requested (400 frames). Using 194 frames.
[MultiTalk] total raw duration = 7.760s
[MultiTalk] multi_audio_type=para | final waveform shape=torch.Size([1, 1, 124160]) | length=124160 samples | seconds=7.760s (expected max of raw)
T5Encoder: 100%|██████████| 24/24 [00:00<00:00, 68.24it/s]
prompt token count: tensor([12], device='cuda:0')
T5Encoder: 100%|██████████| 24/24 [00:00<00:00, 294.20it/s]
prompt token count: tensor([98], device='cuda:0')
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
Requested to load CLIPVisionModelProjection
loaded completely; 8339.57 MB usable, 1208.10 MB loaded, full load: True
Clip embeds shape: torch.Size([1, 257, 1280]), dtype: torch.float32
Combined clip embeds shape: torch.Size([1, 257, 1280])
CUDA Compute Capability: 8.6
Detected model in_channels: 36
Model cross attention type: i2v, num_heads: 40, num_layers: 40
Model variant detected: i2v_480
InfiniteTalk detected, patching model...
model_type FLOW
Loading LoRA: lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16 with strength: 1.0
Using GGUF to load and assign model weights to device...
Loading transformer parameters to cuda:0: 100%|██████████| 1633/1633 [00:00<00:00, 11428.58it/s]
------- Scheduler info -------
Total timesteps: tensor([999, 961, 886, 800, 725, 687], device='cuda:0')
Using timesteps: tensor([999, 961, 886, 800, 725, 687], device='cuda:0')
Using sigmas: tensor([1.0000, 0.9616, 0.8866, 0.8008, 0.7259, 0.6875, 0.0000])
sigmas: tensor([1.0000, 0.9616, 0.8866, 0.8008, 0.7259, 0.6875, 0.0000])
Multitalk audio features shapes (per speaker): [(194, 12, 768)]
Rope function: comfy
Multitalk mode: infinitetalk
Sampling 194 frames in 3 windows, at 480x832 with 6 steps
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: C:\Users\JARRA\Documents\ComfyUI\user__manager\cache\1514988643_custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
WanVAE encoded input:torch.Size([1, 3, 81, 832, 480]) to torch.Size([1, 32, 21, 104, 60])
[WanVAE encode] Allocated memory: memory=7.142 GB
[WanVAE encode] Max allocated memory: max_memory=10.316 GB
[WanVAE encode] Max reserved memory: max_reserved=12.312 GB
WanVAE encoded input:torch.Size([1, 3, 1, 832, 480]) to torch.Size([1, 32, 1, 104, 60])
[WanVAE encode] Allocated memory: memory=6.948 GB
[WanVAE encode] Max allocated memory: max_memory=7.952 GB
[WanVAE encode] Max reserved memory: max_reserved=12.125 GB
Sampling audio indices 0-81: 0%| | 0/6 [00:00<?, ?it/s]Generated new RoPE frequencies
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] Triton compilation failed: triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] def triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2(in_out_ptr0, in_ptr0, in_ptr1, in_ptr2, xnumel, XBLOCK : tl.constexpr):
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] xnumel = 26214400
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] xoffset = tl.program_id(0) * XBLOCK
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] xmask = tl.full([XBLOCK], True, tl.int1)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x2 = xindex // 256
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x1 = ((xindex // 32) % 8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x0 = (xindex % 32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x6 = xindex
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x3 = (xindex % 5120)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] x4 = xindex // 5120
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp0 = tl.load(in_ptr0 + (72*x2), None, eviction_policy='evict_last').to(tl.float32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp28 = tl.load(in_ptr1 + (16 + x0 + 32*(x1 // 2) + 144*x2), None)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp40 = tl.load(in_ptr2 + (72*(x3 // 256) + 1440*x4), None, eviction_policy='evict_last').to(tl.float32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp1 = x1
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp2 = tl.full([1], 0, tl.int64)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp3 = tmp1 >= tmp2
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp4 = tl.full([1], 4, tl.int64)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp5 = tmp1 < tmp4
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp6 = tl.load(in_ptr1 + (4 + 144*x2 + (x1)), tmp5, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp7 = tl.full([1], 63, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp8 = tmp6 & tmp7
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp9 = tl.full(tmp8.shape, 0.0, tmp8.dtype)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp10 = tl.where(tmp5, tmp8, tmp9)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp11 = tmp1 >= tmp4
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp12 = tl.full([1], 8, tl.int64)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp13 = tmp1 < tmp12
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp14 = tl.load(in_ptr1 + (12 + 144*x2 + ((-4) + x1)), tmp11, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp15 = tl.full([1], 15, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp16 = tmp14 & tmp15
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp17 = tl.load(in_ptr1 + (4 + 144*x2 + ((-4) + x1)), tmp11, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp18 = tl.full([1], 2, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp19 = tmp17 >> tmp18
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp20 = tl.full([1], 48, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp21 = tmp19 & tmp20
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp22 = tmp16 | tmp21
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp23 = tl.full(tmp22.shape, 0.0, tmp22.dtype)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp24 = tl.where(tmp11, tmp22, tmp23)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp25 = tl.where(tmp5, tmp10, tmp24)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp26 = tmp25.to(tl.float32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp27 = tmp0 * tmp26
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp29 = ((((x6 // 32) % 8)) % 2)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp30 = tl.full([1], 1, tl.int64)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp31 = tmp29 < tmp30
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp32 = tl.full([1], 0, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp33 = tl.full([1], 4, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp34 = tl.where(tmp31, tmp32, tmp33)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp35 = tmp28 >> tmp34
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp36 = tl.full([1], 15, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp37 = tmp35 & tmp36
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp38 = tmp37.to(tl.float32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp39 = tmp27 * tmp38
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp41 = ((x3 % 256)) // 32
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp42 = tmp41 >= tmp2
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp43 = tmp41 < tmp4
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp44 = tl.load(in_ptr1 + (8 + 144*(x3 // 256) + 2880*x4 + (((x3 % 256)) // 32)), tmp43, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp45 = tl.full([1], 63, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp46 = tmp44 & tmp45
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp47 = tl.full(tmp46.shape, 0.0, tmp46.dtype)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp48 = tl.where(tmp43, tmp46, tmp47)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp49 = tmp41 >= tmp4
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp50 = tmp41 < tmp12
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp51 = tl.load(in_ptr1 + (12 + 144*(x3 // 256) + 2880*x4 + ((-4) + (((x3 % 256)) // 32))), tmp49, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp52 = tl.full([1], 4, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp53 = tmp51 >> tmp52
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp54 = tl.load(in_ptr1 + (8 + 144*(x3 // 256) + 2880*x4 + ((-4) + (((x3 % 256)) // 32))), tmp49, eviction_policy='evict_last', other=0.0)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp55 = tl.full([1], 2, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp56 = tmp54 >> tmp55
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp57 = tl.full([1], 48, tl.uint8)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp58 = tmp56 & tmp57
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp59 = tmp53 | tmp58
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp60 = tl.full(tmp59.shape, 0.0, tmp59.dtype)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp61 = tl.where(tmp49, tmp59, tmp60)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp62 = tl.where(tmp43, tmp48, tmp61)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp63 = tmp62.to(tl.float32)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp64 = tmp40 * tmp63
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tmp65 = tmp39 - tmp64
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] tl.store(in_out_ptr0 + (x6), tmp65, None)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1]
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] metadata: {'signature': {'in_out_ptr0': '*fp16', 'in_ptr0': '*fp16', 'in_ptr1': '*u8', 'in_ptr2': '*fp16', 'xnumel': 'i32', 'XBLOCK': 'constexpr'}, 'device': 0, 'constants': {'XBLOCK': 1024}, 'configs': [{(0,): [['tt.divisibility', 16]], (1,): [['tt.divisibility', 16]], (2,): [['tt.divisibility', 16]], (4,): [['tt.divisibility', 16]]}], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 86}
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] Traceback (most recent call last):
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] File "C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 778, in _precompile_config
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] binary = triton.compile(*compile_args, **compile_kwargs)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] File "C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\triton\compiler\compiler.py", line 349, in compile
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] fn_cache_manager.put_group(metadata_filename, metadata_group)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] File "C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\triton\runtime\cache.py", line 96, in put_group
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] return self.put(grp_contents, grp_filename, binary=False)
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] File "C:\Users\JARRA\Documents\ComfyUI\.venv\Lib\site-packages\triton\runtime\cache.py", line 117, in put
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] with open(temp_path, mode) as f:
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] ^^^^^^^^^^^^^^^^^^^^^
E0108 16:35:26.795000 24568 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:780] [0/0_1] FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\triton\0\YGMCYRRSOS4PIDJL6VS3N4XFTCS4MXREF32NRXHKMJAU7DJDQQNQ\tmp.26fc6c94\__grp__triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2.json'
Error during model prediction: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\triton\0\YGMCYRRSOS4PIDJL6VS3N4XFTCS4MXREF32NRXHKMJAU7DJDQQNQ\tmp.26fc6c94\__grp__triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2.json'
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
Error during sampling: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\triton\0\YGMCYRRSOS4PIDJL6VS3N4XFTCS4MXREF32NRXHKMJAU7DJDQQNQ\tmp.26fc6c94\__grp__triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2.json'
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
!!! Exception during processing !!! FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\triton\0\YGMCYRRSOS4PIDJL6VS3N4XFTCS4MXREF32NRXHKMJAU7DJDQQNQ\tmp.26fc6c94\__grp__triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2.json'
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
Traceback (most recent call last):
File "C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\JARRA\OneDrive\Documents\ComfyUI\resources\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 3243, in process
raise e
File "C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 2545, in process
noise_pred, _, self.cache_state = predict_with_cfg(
^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1747, in predict_with_cfg
raise e
File "C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1594, in predict_with_cfg
noise_pred_cond, noise_pred_ovi, cache_state_cond = transformer(
^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 3173, in forward
x, x_ip, lynx_ref_feature, x_ovi = block(x, x_ip=x_ip, lynx_ref_feature=lynx_ref_feature, x_ovi=x_ovi, x_onetoall_ref=x_onetoall_ref, onetoall_freqs=onetoall_freqs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_dynamo\eval_frame.py", line 414, in call
return super().call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_dynamo\eval_frame.py", line 845, in compile_wrapper
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\compile_fx.py", line 990, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\compile_fx.py", line 974, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\compile_fx.py", line 1695, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\compile_fx.py", line 1505, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\graph.py", line 2319, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\graph.py", line 2329, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\graph.py", line 2397, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\codecache.py", line 3548, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\runtime\compile_tasks.py", line 33, in _reload_python_module
exec(code, mod.dict, mod.dict)
File "C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\vt\cvtcth3r2bq24xgslvskqy4lgsaspdjtwcxebaxkx4fd4q3tz5qx.py", line 261, in
triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2 = async_compile.triton('triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2', '''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\async_compile.py", line 500, in triton
kernel.precompile(
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\runtime\triton_heuristics.py", line 448, in precompile
self._precompile_worker()
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\runtime\triton_heuristics.py", line 470, in _precompile_worker
compile_results.append(self._precompile_config(c))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\torch_inductor\runtime\triton_heuristics.py", line 778, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\triton\compiler\compiler.py", line 349, in compile
fn_cache_manager.put_group(metadata_filename, metadata_group)
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\triton\runtime\cache.py", line 96, in put_group
return self.put(grp_contents, grp_filename, binary=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\JARRA\Documents\ComfyUI.venv\Lib\site-packages\triton\runtime\cache.py", line 117, in put
with open(temp_path, mode) as f:
^^^^^^^^^^^^^^^^^^^^^
torch._inductor.exc.InductorError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\JARRA\AppData\Local\Temp\torchinductor_JARRA\triton\0\YGMCYRRSOS4PIDJL6VS3N4XFTCS4MXREF32NRXHKMJAU7DJDQQNQ\tmp.26fc6c94\__grp__triton_poi_fused___rshift___apply_lora_bitwise_and_bitwise_or_cat_lift_fresh_mul_select_split_split_with_sizes_sub_view_2.json'
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
Prompt executed in 90.71 seconds
Sampling audio indices 0-81: 0%| | 0/6 [00:22<?, ?it/s]
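Reading the traceback, Triton's cache manager appears to create a temporary directory inside the cache-key folder (the tmp.26fc6c94 in the path), write the metadata group file there, and then rename it into place; the open() call is what raises, so the freshly created temp directory is already gone by the time the file is written. On Windows that usually points at antivirus, a temp-folder cleaner, or sync software racing the compiler. Here is a rough self-test of the same write-then-rename pattern, independent of ComfyUI (directory and file names below are made up for illustration):

# Self-test sketch: mimic the temp-write + rename that triton/runtime/cache.py
# performs, to see whether something on this machine deletes fresh temp
# directories under %TEMP% out from under the process.
import os
import tempfile
import uuid

cache_root = os.path.join(tempfile.gettempdir(), "torchinductor_selftest")  # hypothetical name
tmp_dir = os.path.join(cache_root, f"tmp.{uuid.uuid4().hex[:8]}")           # like tmp.26fc6c94
os.makedirs(tmp_dir, exist_ok=True)

tmp_path = os.path.join(tmp_dir, "group.json")
with open(tmp_path, "w") as f:  # the equivalent open() is what fails above
    f.write("{}")

os.replace(tmp_path, os.path.join(cache_root, "group.json"))  # move into place
print("temp write + rename survived:", os.path.exists(os.path.join(cache_root, "group.json")))

If that succeeds repeatedly while the workflow still fails at the same point, the problem is specific to the compile step, and bypassing the torch compile settings node in the workflow (so the sampler runs uncompiled) should at least confirm whether the Inductor/Triton cache is the only thing affected.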
Thank you in advance.