This is a custom node for IP-Adapter in ComfyUI.
- Clone this repository into `ComfyUI/custom_nodes`.
- Place the IP-Adapter model in `ComfyUI/custom_nodes/IPAdapter-ComfyUI/models` (e.g., `ip-adapter_sdxl.bin`).
- Place the CLIP vision model in `ComfyUI/models/clip_vision` (e.g., `pytorch_model.bin`).
- Start ComfyUI and load the workflow `ip-adapter.json`.
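After these steps, the directory layout should look like this (the model filenames are just the examples given above; yours may differ):

```
ComfyUI/
├── custom_nodes/
│   └── IPAdapter-ComfyUI/
│       ├── ip-adapter.json
│       └── models/
│           └── ip-adapter_sdxl.bin
└── models/
    └── clip_vision/
        └── pytorch_model.bin
```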
- model: Connect the SDXL base and refiner models.
- image: The reference image.
- clip_vision: Connect the output of `Load CLIP Vision`.
- mask: Optional. Connect a mask to limit the area of application. The mask should have the same resolution as the generated image.
- weight: How strongly the reference image is applied.
- model_name: Specify the filename of the model to use.
- dtype: If a black image is generated, select `fp32`. Generation time hardly changes, so it may be fine to just leave it at `fp32`.
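Since the mask input must match the resolution of the generated image, you may need to resize your mask first. A minimal sketch with Pillow, using a toy in-memory mask and an assumed SDXL target resolution of 1024x1024 (adjust to your actual generation size):

```python
from PIL import Image

# Toy single-channel ("L" mode) mask at 512x512; in practice you would
# load your own file, e.g. Image.open("mask.png").convert("L").
mask = Image.new("L", (512, 512), 255)

# Resize to the generation resolution. NEAREST keeps hard mask edges
# instead of introducing gray, partially-masked pixels.
mask_resized = mask.resize((1024, 1024), Image.NEAREST)
print(mask_resized.size)  # → (1024, 1024)
```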
- MODEL: Connect to KSampler, etc.
- CLIP_VISION_OUTPUT: Normally you can ignore this. Reusing it can avoid redundant CLIP vision computation when combining with Revision, etc.
- For some reason, `Apply ControlNet` bugs out, so please use `Apply ControlNet (Advanced)` as an alternative.
- Official models: https://huggingface.co/h94/IP-Adapter