# ComfyUI-GGUF

GGUF Quantization support for native ComfyUI models

This is currently very much WIP. These custom nodes provide support for model files stored in the GGUF format popularized by llama.cpp.

While quantization wasn't feasible for regular UNET models (conv2d), transformer/DiT models such as flux seem less affected by quantization. This allows running them at much lower bits per weight using variable-bitrate quants on low-end GPUs. For further VRAM savings, a node to load a quantized version of the T5 text encoder is also included.
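
To see why this matters for VRAM, here is a rough back-of-the-envelope sketch. The ~12B parameter count for flux and the Q4_0 block layout (32 weights stored in an 18-byte block: one 16-bit scale plus 32 four-bit values, as in llama.cpp) are assumptions for illustration, not figures from this repository:

```python
# Rough weight-storage estimate for a transformer checkpoint at two precisions.
# Assumptions (illustrative): ~12B parameters, and the llama.cpp Q4_0 layout
# of 32 weights per 18-byte block (16-bit scale + 32 x 4-bit values).

def gigabytes(num_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB at a given bits-per-weight."""
    return num_params * bits_per_weight / 8 / 1024**3

PARAMS = 12e9
FP16_BPW = 16.0
Q4_0_BPW = 18 * 8 / 32  # 18 bytes per 32 weights -> 4.5 bits per weight

print(f"fp16: {gigabytes(PARAMS, FP16_BPW):.1f} GiB")
print(f"Q4_0: {gigabytes(PARAMS, Q4_0_BPW):.1f} GiB")
```

The roughly 3.5x reduction is what makes a flux-sized model loadable on low-end GPUs (activations and the text encoder still need memory on top of this).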

[Example workflow screenshot: Comfy_Flux1_dev_Q4_0_GGUF_1024]

Note: The "Force/Set CLIP Device" node is NOT part of this node pack. Do not install it if you only have one GPU, and do not set it to cuda:0 and then complain about OOM errors if you do not understand what it is for. There is no need to copy the workflow above; just use your own workflow and replace the stock "Load Diffusion Model" node with the "Unet Loader (GGUF)" node.

## Installation

> **Important**
> Make sure your ComfyUI is on a recent-enough version to support custom ops when loading the UNET-only.

To install the custom node normally, git clone this repository into your custom nodes folder (ComfyUI/custom_nodes) and install the only inference dependency (pip install --upgrade gguf):

```
git clone https://github.com/city96/ComfyUI-GGUF
```

To install the custom node on a standalone ComfyUI release, open a CMD inside the "ComfyUI_windows_portable" folder (where your run_nvidia_gpu.bat file is) and use the following commands:

```
git clone https://github.com/city96/ComfyUI-GGUF ComfyUI/custom_nodes/ComfyUI-GGUF
.\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt
```

On macOS Sequoia, torch 2.4.1 seems to be required, as 2.6.x nightly builds cause an "M1 buffer is not large enough" error. See this issue for more information/workarounds.

## Usage

Simply use the GGUF Unet loader found under the bootleg category. Place the .gguf model files in your ComfyUI/models/unet folder.

LoRA loading is experimental but it should work with just the built-in LoRA loader node(s).

Pre-quantized models:

Initial support for quantizing T5 has also been added recently; these quants can be used via the various *CLIPLoader (gguf)* nodes, which work in place of the regular ones. For the CLIP model, use whatever model you were using before. The loader can handle both types of files: gguf and regular safetensors/bin.
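
The dual-format behavior is possible because GGUF files are easy to detect: the format begins with the 4-byte magic `GGUF`. A minimal sketch of that kind of check (the function name is hypothetical, not part of this node pack):

```python
# Minimal sketch: tell a GGUF checkpoint apart from another format (e.g.
# safetensors/bin) by its magic bytes. GGUF files start with b"GGUF";
# anything else falls through to the regular loading path.

def sniff_format(path: str) -> str:
    """Return 'gguf' if the file starts with the GGUF magic, else 'other'."""
    with open(path, "rb") as f:
        head = f.read(4)
    return "gguf" if head == b"GGUF" else "other"
```

A real loader would additionally parse the version field and metadata that follow the magic, but the magic check alone is enough to route a file to the right code path.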

See the instructions in the tools folder for how to create your own quants.
