Replies: 67 comments · 144 replies
-
Only XL?
-
Works great, thank you for your efforts. Things to watch out for at the moment:
-
Awesome, this time I didn't have to build a Gradio demo since it is now implemented :D If the face input hasn't changed, you really should cache it. I did this for IP-Adapter Face ID and it sped things up hugely: caching the calculated face input vectors.
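A minimal sketch of that idea, assuming a hypothetical `compute_face_embedding()` callable standing in for the extension's insightface/projection step; the cache is keyed on a hash of the input image bytes, so the expensive face analysis only runs once per unique face image:

```python
import hashlib

import numpy as np

_embedding_cache: dict[str, np.ndarray] = {}

def get_face_embedding(image_bytes: bytes, compute_face_embedding) -> np.ndarray:
    """Return the face embedding for an image, reusing the cached result
    when the exact same image bytes were processed before."""
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _embedding_cache:
        # Only run the slow insightface / projection step on a cache miss.
        _embedding_cache[key] = compute_face_embedding(image_bytes)
    return _embedding_cache[key]
```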
-
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:43<00:00, 2.19s/it]
-
*** Error running process: D:\Webui1111\Webui\stable-diffusion-portable-main\extensions\sd-webui-controlnet\scripts\controlnet.py
-
I get an assertion error using all the same settings as above, with an SDXL model of course.
-
Working fine. Thank you.
-
Does it also work in img2img? More or less like "roop"?
-
Thanks for your work! However, after playing around with this for a bit, something seems off compared to using InstantID through the new Diffusers pipeline. Let me demonstrate with an example (note, however, that the same happens with every image combination I've tried). I'm using a picture of (Daniel Radcliffe as) Harry Potter in ControlUnit 0 (ip-adapter) and a picture of Obama in ControlUnit 1.

Here's the result with the "Control Weight" of Unit 0 set equal to 1.0: Obviously, the quality is rather poor. The CFG scale is set to 4.0 and I use the euler-a sampler.

Now, if I turn the "Control Weight" of Unit 0 down to 0.0, I get this: This looks just slightly less like Radcliffe's Potter than the first image, but the quality is obviously vastly better.

If I try values of Unit 0's "Control Weight" between 0.0 and 1.0, the quality is always degraded (although not as much as at 1.0) – even at low values like 0.2, the output image is noticeably blurry. If I keep the "Control Weight" at 1.0 but decrease the CFG scale to 3.0 or 2.0, the quality improves somewhat but the blurriness is still there (and the contrast gets worse):

Across several different input image combinations I've found that I get the best compromise between quality and identity fidelity when ControlUnit 0's "Control Weight" is 0.0 or a very low value like 0.05 or 0.1, and the CFG scale is as high as I can push it without getting too much contrast (usually around 4.0). That's somewhat odd, and I don't experience the same when using InstantID directly – the best combination of weights is then something like 0.8 and 0.8. Any thoughts?
-
How to deal with multiple faces in one picture? I tested it on the webui, and InstantID only handled one face.
-
Seems to download the onnx files and then complains about the onnxruntime. I tried deleting and forcing it to reinstall the onnx files and the onnxruntime from the venv, but no luck. Thoughts?

Downloading: "https://huggingface.co/DIAMONIK7777/antelopev2/resolve/main/1k3d68.onnx" to D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\antelopev2\1k3d68.onnx
100%|███████████████████████████████████████████████████████████████████████████████| 137M/137M [00:10<00:00, 14.0MB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 4.80M/4.80M [00:00<00:00, 12.8MB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 1.26M/1.26M [00:00<00:00, 8.26MB/s]
100%|███████████████████████████████████████████████████████████████████████████████| 249M/249M [00:19<00:00, 13.6MB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 16.1M/16.1M [00:01<00:00, 13.1MB/s]
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
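If the complaint really is about onnxruntime, a quick sanity check from inside the webui venv tells you which build is active; this is just generic onnxruntime usage, not something the extension exposes:

```python
# Run with the webui's venv activated.
import onnxruntime as ort

print(ort.__version__)
# If only 'CPUExecutionProvider' shows up, onnxruntime-gpu is either
# not installed in this venv or failed to load its CUDA libraries.
print(ort.get_available_providers())
```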
-
Great feature, thanks a lot!!
-
Hello, I have this error in Automatic1111 on a Mac M2:
*** Error running process: /Users/cinesaibai/pinokio/api/sd-webui.pinokio.git/automatic1111/extensions/sd-webui-controlnet/scripts/controlnet.py
-
It works absolutely best if your input face has half shadow on it, so the volume of the face is read better by the IP-Adapter; with flat light, Stallone was unrecognizable.
-
I uninstalled the ControlNet extension on my computer, re-downloaded the ControlNet update from https://github.com/huchenlei/sd-webui-controlnet/tree/fix_insight?tab=readme-ov-file, and also re-downloaded the two preprocessors and models, but the problem still exists. May I ask if my steps are incorrect?
-
Is there still no way to edit the keypoints, or even create keypoints from scratch without using a reference image? There is an 'edit' button which opens Photopea - but I couldn't figure out what this is supposed to do, since Photopea doesn't seem to treat the keypoints as editable vectors - and even if it did - how would I save the edit so that it is then used by A1111 again? Am I missing a major thing here?
-
I found
-
Has anyone been able to get this working with Oobabooga text generation web ui via the API? Seems there would be some amazing possibilities...
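Calling it over the API from another app should just be a normal txt2img request with the two InstantID units in the ControlNet `alwayson_scripts` block. A rough sketch (field names follow the public webui/ControlNet API; the prompt, image paths, and exact model strings are placeholders, and the model names may need the hash suffix shown in the UI):

```python
import base64

import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "photo of a person, detailed face",
    "steps": 30,
    "cfg_scale": 4.5,
    "width": 1024,
    "height": 1024,
    "alwayson_scripts": {
        "ControlNet": {
            "args": [
                {   # Unit 0: face embedding (must come before the keypoint unit)
                    "module": "instant_id_face_embedding",
                    "model": "ip-adapter_instant_id_sdxl",
                    "image": b64("face.jpg"),
                    "weight": 1.0,
                },
                {   # Unit 1: facial keypoints
                    "module": "instant_id_face_keypoints",
                    "model": "control_instant_id_sdxl",
                    "image": b64("pose.jpg"),
                    "weight": 1.0,
                },
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
images = r.json()["images"]  # base64-encoded result images
```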
-
After the 1.1.447 update, InstantID stopped working; it gives an error.
-
Hi there! Any tips to reduce the blurriness / improve the quality of the image? I'm using:
The only thing I imagine could improve the quality is Pixel Perfect on both units, but my laptop is not powerful... Is it worth using if I rent a cloud GPU?
-
Hey, can we use this InstantID in Stable Diffusion Forge?
-
Will this model be updated for the FLUX model?
-
All this does not work now; I tried it on different builds of SD. Automatic1111 version v1.10.0, model juggernautXL_version6Rundiffusion.safetensors [1fe6c7ec54], although I tried another XL model, and I tried different resolutions. In the first tab, ControlNet Unit 0 [Instant-ID]: Preprocessor instant_id_face_embedding, Model ip-adapter_instant_id_sdxl [eb2d3ec0]; in the second tab, ControlNet Unit 1 [Instant-ID]: Preprocessor instant_id_face_keypoints, Model control_instant_id_sdxl [c5c25a50]. Nothing happens, the face does not change. ControlNet was reinstalled. Does it even work for anyone now???
-
Forge is now just a preview browser for Flux. They don't give a shit about actually using it in a professional or creative setting.
-
Why do the characters I generate always appear in the middle of the image without any action, even though I specifically include actions in my prompt? I tried many, many times with different prompts, control weights, and so on... but no luck. Did I miss something?

Prompt: mid shot,bust of A man is playing golf and is about to hit the ball,from front,
Steps: 28, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 3, Seed: 2070426131, Size: 1280x768, Model: DreamShaperXL_Turbo_v2_1, VAE: sdxl_vae.safetensors, ControlNet 0: "Module: instant_id_face_embedding, Model: ip-adapter_instant_id_sdxl [eb2d3ec0], Weight: 1.0, Resize Mode: Resize and Fill, Processor Res: 768, Threshold A: 0.5, Threshold B: 0.5, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: True, Control Mode: My prompt is more important", ControlNet 1: "Module: instant_id_face_keypoints, Model: control_instant_id_sdxl [c5c25a50], Weight: 1.0, Resize Mode: Resize and Fill, Processor Res: 512, Threshold A: 0.5, Threshold B: 0.5, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: True, Control Mode: My prompt is more important", Version: 1.10.1
-
Instant ID project
https://github.com/InstantID/InstantID
Instant ID uses a combination of ControlNet and IP-Adapter to control the facial features in the diffusion process. One unique design of Instant ID is that it passes the facial embedding from the IP-Adapter projection as the crossattn input to the ControlNet unet; normally the crossattn input to the ControlNet unet is the prompt's text embedding.
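Conceptually (a rough sketch in Diffusers terms with dummy tensors and assumed shapes, not the extension's actual code), the swap looks like this: where a vanilla ControlNet receives the prompt's text embedding as `encoder_hidden_states`, InstantID hands over the projected face tokens instead.

```python
import torch
from diffusers import ControlNetModel

# The InstantID ControlNet published by the upstream project.
controlnet = ControlNetModel.from_pretrained(
    "InstantX/InstantID", subfolder="ControlNetModel", torch_dtype=torch.float32
)

latents = torch.randn(1, 4, 128, 128)          # SDXL latents for a 1024x1024 image
timestep = torch.tensor(981)
keypoint_map = torch.randn(1, 3, 1024, 1024)   # rendered 5-point facial keypoint image
face_tokens = torch.randn(1, 16, 2048)         # projected face embedding from the IP-Adapter resampler (assumed shape)
added = {"text_embeds": torch.randn(1, 1280), "time_ids": torch.randn(1, 6)}

down_res, mid_res = controlnet(
    sample=latents,
    timestep=timestep,
    encoder_hidden_states=face_tokens,  # a vanilla ControlNet would get the text embedding here
    controlnet_cond=keypoint_map,
    added_cond_kwargs=added,
    return_dict=False,
)
```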
Download models
You need to download the following models and put them under the {A1111_root}/models/ControlNet directory. It is also required to rename the models to ip-adapter_instant_id_sdxl and control_instant_id_sdxl so that they can be correctly recognized by the extension.
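The post doesn't spell out the download source here; assuming the upstream InstantX/InstantID Hugging Face repo layout (ip-adapter.bin plus ControlNetModel/diffusion_pytorch_model.safetensors) and that the extension matches on the renamed file stems, the fetch-and-rename step could look roughly like this:

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Adjust to your install; this is {A1111_root}/models/ControlNet from the text above.
controlnet_dir = Path("stable-diffusion-webui/models/ControlNet")
controlnet_dir.mkdir(parents=True, exist_ok=True)

# (file in the InstantX/InstantID repo, target name the extension expects)
files = [
    ("ip-adapter.bin", "ip-adapter_instant_id_sdxl.bin"),
    ("ControlNetModel/diffusion_pytorch_model.safetensors",
     "control_instant_id_sdxl.safetensors"),
]

for repo_file, target_name in files:
    local_path = hf_hub_download(repo_id="InstantX/InstantID", filename=repo_file)
    shutil.copy(local_path, controlnet_dir / target_name)
    print("saved", controlnet_dir / target_name)
```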
How to use
InstantID takes 2 models in the UI. You should always set the IP-Adapter model as the first model, as the ControlNet model takes the output from the IP-Adapter model (the IP-Adapter model should be hooked first).
Unit 0 Setting
You must set the ip-adapter unit right before the ControlNet unit. The projected face embedding output of the IP-Adapter unit will be used as part of the input to the next ControlNet unit.
Unit 1 Setting
The ControlNet unit accepts a keypoint map of 5 facial keypoints. You are not restricted to using the facial keypoints of the same person you used in Unit 0. Here I use a different person's facial keypoints.
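For intuition about what this unit actually receives: insightface returns exactly five keypoints (eyes, nose tip, mouth corners) for each detected face, and the preprocessor renders them into an image. A standalone sketch using the insightface API directly; the model root, image path, and the plain white-dot rendering are illustrative, not the extension's exact drawing code:

```python
import cv2
import numpy as np
from insightface.app import FaceAnalysis

# antelopev2 is the model pack the extension uses; root is an assumed local path.
app = FaceAnalysis(name="antelopev2", root="./insightface",
                   providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("reference_pose.jpg")
faces = app.get(img)  # one entry per detected face; pick the largest one
face = max(faces, key=lambda f: (f.bbox[2] - f.bbox[0]) * (f.bbox[3] - f.bbox[1]))

print(face.kps)  # 5x2 array: left eye, right eye, nose, left/right mouth corner
canvas = np.zeros_like(img)
for x, y in face.kps.astype(int):
    cv2.circle(canvas, (int(x), int(y)), 10, (255, 255, 255), -1)
cv2.imwrite("keypoint_map.png", canvas)  # rough stand-in for the keypoint map
```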
CFG
It is recommended to set CFG to 4~5 to get the best result. Depending on the sampling method and base model this number may vary, but generally you need to use a CFG scale a little bit lower than your usual setting.
Output
Follow-up work
Note
As insightface's GitHub releases currently do not include the antelopev2 model, we are downloading it from a Hugging Face mirror: https://huggingface.co/DIAMONIK7777/antelopev2. If you are in mainland China and don't have a good connection to Hugging Face, you can manually download the models from somewhere else and place them under extensions/sd-webui-controlnet/annotators/downloads/insightface/models/antelopev2.
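A small helper for that manual step; the five file names are assumed from the usual antelopev2 pack (only 1k3d68.onnx is confirmed by the download log earlier in this thread), and the target path is the one quoted in the note above:

```python
import urllib.request
from pathlib import Path

BASE = "https://huggingface.co/DIAMONIK7777/antelopev2/resolve/main"
# Assumed contents of the antelopev2 pack.
FILES = ["1k3d68.onnx", "2d106det.onnx", "genderage.onnx",
         "glintr100.onnx", "scrfd_10g_bnkps.onnx"]

# Path from the note above; adjust it to your actual extension folder.
target = Path("extensions/sd-webui-controlnet/annotators/downloads/insightface/models/antelopev2")
target.mkdir(parents=True, exist_ok=True)

for name in FILES:
    dest = target / name
    if not dest.exists():
        print("downloading", name)
        urllib.request.urlretrieve(f"{BASE}/{name}", str(dest))
```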
Known issues