HunyuanVideo gets stuck when loading video encoder #68

Open
gioxyer opened this issue Dec 8, 2024 · 8 comments

gioxyer commented Dec 8, 2024

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\giorg\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
The config attributes {'mid_block_causal_attn': True} were passed to AutoencoderKLCausal3D, but are not expected and will be ignored. Please verify your config.json configuration file.
encoded latents shape torch.Size([1, 16, 11, 96, 96])
Loading text encoder model (clipL) from: C:\Users\giorg\Documents\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
Text encoder to dtype: torch.float16
Loading tokenizer (clipL) from: C:\Users\giorg\Documents\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
C:\Users\giorg\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: huggingface/transformers#31884
warnings.warn(
Loading text encoder model (llm) from: C:\Users\giorg\Documents\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 4/4 [12:41<00:00, 190.36s/it]

[Screenshot 2024-12-08 at 15-59-38: ComfyUI workflow hyvideo_v2v_example_01 stuck at 42%]

kijai (Owner) commented Dec 8, 2024

How much memory do you have? It looks like not enough. If you update the nodes, there should be a quantization option for the text encoder; it requires bitsandbytes to be installed, but it reduces the text encoder's memory use by a lot.
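A rough back-of-the-envelope calculation shows why quantizing the text encoder helps so much here. The ~8B parameter count (from the llava-llama-3-8b model name) and the 4-bit byte cost are illustrative assumptions, not figures measured from the node:

```python
# Rough weight-memory estimate for an ~8B-parameter text encoder at
# different precisions. Parameter count and per-parameter byte costs
# are assumptions for illustration, not measured values.
PARAMS = 8e9  # ~8 billion parameters (assumed from the model name)

def model_size_gib(params, bytes_per_param):
    """Approximate weight memory in GiB (weights only, no activations)."""
    return params * bytes_per_param / (1024 ** 3)

fp16 = model_size_gib(PARAMS, 2)    # float16: 2 bytes per parameter
nf4 = model_size_gib(PARAMS, 0.5)   # 4-bit quantized: ~0.5 bytes per parameter

print(f"fp16: ~{fp16:.1f} GiB, 4-bit: ~{nf4:.1f} GiB")
```

At fp16 the weights alone (~15 GiB) already exceed an 8 GB card, which is consistent with the loader appearing to hang while it pages through shards; at ~4 GiB the 4-bit version at least has a chance to fit.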

gioxyer (Author) commented Dec 8, 2024

I have 8 GB of VRAM. In which part of the text encoder node is that option? Thanks.

kijai (Owner) commented Dec 8, 2024

You should have this option:

[screenshot of the node options]

For 8 GB you would also need to use the block_swap feature; check the example workflow for more info.
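For context, block swapping generally means keeping only a subset of the model's transformer blocks resident on the GPU at a time and offloading the rest to system RAM. A toy sketch of the idea in plain Python (a conceptual illustration only, not the actual node implementation):

```python
# Toy illustration of block swapping: keep at most `max_gpu_blocks` of a
# model's transformer blocks on the GPU at once, evicting the oldest
# resident block to CPU before loading the next one.
class Block:
    def __init__(self, idx):
        self.idx = idx
        self.device = "cpu"  # all blocks start offloaded

def run_with_block_swap(blocks, max_gpu_blocks):
    """Process blocks in order, swapping each to GPU just in time."""
    resident = []  # blocks currently on the "GPU"
    for block in blocks:
        if len(resident) >= max_gpu_blocks:
            evicted = resident.pop(0)
            evicted.device = "cpu"   # offload the oldest block back to CPU
        block.device = "gpu"         # load the block needed for this step
        resident.append(block)
        # ... the forward pass through `block` would happen here ...
    return blocks

blocks = run_with_block_swap([Block(i) for i in range(40)], max_gpu_blocks=8)
print(sum(b.device == "gpu" for b in blocks))  # prints 8
```

The trade-off is straightforward: peak VRAM drops to roughly `max_gpu_blocks / total_blocks` of the full model's weight memory, at the cost of CPU-to-GPU transfer time on every step.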

gioxyer (Author) commented Dec 8, 2024

I have these options on your nodes:

[Screenshots (18) and (19) of the current node options]

kijai (Owner) commented Dec 8, 2024

Then the nodes are not up to date.

gioxyer (Author) commented Dec 8, 2024

I reinstalled from the Manager, but it gives this error now:

[screenshot of the error]

raultresd commented Dec 13, 2024

[four screenshots of the setup and error]

I have the same problem, and I don't think it's a lack of memory in my case.

raultresd commented

I solved the problem by updating the dependencies, which only updated torch to 2.5.1.

[screenshot of the dependency update]
