
> [!IMPORTANT]
> Errors may occur on Windows because paths are formatted for Linux without accounting for Windows. If you run into one, please report it on the issues page.

# LCM_Inpaint-Outpaint_Comfy

ComfyUI custom nodes for inpainting/outpainting using the latent consistency model (LCM).

## Mix Images

## Inpaint

## Outpaint

## Prompt Weighting

Note: requires CPU inference (select CPU in the LCMLoader node); GPU inference currently fails with an unresolved error.

Append '+' to a word for more emphasis and '-' for less; adding more '+' or '-' characters strengthens the effect. See the example below.
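For illustration, a weighted prompt might look like the following (a made-up prompt; only the '+'/'-' suffixes are the weighting syntax described above):

```
a photo of a cat++ on a sofa, blurry-- background
```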

## FreeU

## ReferenceOnly

## Style Transfer

## Image Variations

## Promptless Outpainting

## Image Blending

## ControlNet/T2I Adapter

T2I Adapter support thanks to Michael Poutre (https://github.com/M1kep).

Place model folders inside `ComfyUI/models/controlnet`.
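For illustration, a diffusers-format model folder placed there might look like the following (`control_v11p_sd15_canny` is just an example model name; any compatible model folder works):

```
ComfyUI/models/controlnet/
└── control_v11p_sd15_canny/
    ├── config.json
    └── diffusion_pytorch_model.safetensors
```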

## IP Adapter

Place model folders inside `ComfyUI/models/controlnet`.

## Canvas Inpaint/Outpaint/img2img

In your terminal/cmd, from the directory containing your ComfyUI folder:

```
cd ComfyUI/custom_nodes/LCM_Inpaint-Outpaint_Comfy/CanvasTool
python setup.py
```

## How to Use

Clone into the `custom_nodes` folder inside your ComfyUI directory:

```
git clone https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy
```

Install the requirements after changing directory to the `LCM_Inpaint-Outpaint_Comfy` folder:

```
cd LCM_Inpaint-Outpaint_Comfy
pip install -r requirements.txt
```

Download the model in diffusers format from https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7/tree/main and place it inside the `models/diffusers` folder in your ComfyUI directory. (The model folder must be named `LCM_Dreamshaper_v7`.)
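One way to fetch the model, assuming `git` and `git-lfs` are installed (the clone already creates a folder with the required name):

```
cd ComfyUI/models/diffusers
git lfs install
git clone https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7
```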

Load the workflow by choosing the .json file for inpainting or outpainting.

## Credits

- nagolinc's img2img script
- T2I Adapters by Michael Poutre (https://github.com/M1kep)