The install URL of this archived V1.0 extension for the WebUI is https://github.com/lllyasviel/webui-controlnet-v1-archived.git
You need to completely remove ControlNet WebUI Extension V1.1 if you want to use this old version.
You need to completely remove this old version if you want to update/install ControlNet WebUI Extension V1.1.
If you want to use this version, then after installing this old version 1.0, make sure that your settings exactly match the reference settings for version 1.0,
and make sure that you have completely restarted your A1111 (including your terminal).
This archive is mainly for performance comparison. If you sometimes feel that ControlNet 1.0 gives better results than 1.1, you can use this archive to verify whether that impression is accurate.
We want to make sure that ControlNet 1.1 gives better results than 1.0 in at least 80% of cases and gives similar results in the remaining 20%. If you find any case where this archived 1.0 version is better, please let us know by opening an issue in the ControlNet 1.1 repository.
(WIP) WebUI extension for ControlNet and T2I-Adapter
This extension is for AUTOMATIC1111's Stable Diffusion web UI; it allows the Web UI to add ControlNet to the original Stable Diffusion model when generating images. The addition is on-the-fly; no merging is required.
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
Thanks & inspired by: kohya-ss/sd-webui-additional-networks
- Dragging a large file onto the Web UI may freeze the entire page. It is better to use the file upload option instead.
- Just like WebUI's hijack, we use some interpolation to accept configurations of arbitrary size (see scripts/cldm.py).
- Open "Extensions" tab.
- Open "Install from URL" tab in the tab.
- Enter the URL of this repo into "URL for extension's git repository".
- Press "Install" button.
- Reload/Restart Web UI.
Upgrade Gradio if any UI issues occur: pip install gradio==3.16.2
- Put the ControlNet models (.pt, .pth, .ckpt or .safetensors) inside the models/ControlNet folder.
- Open the "txt2img" or "img2img" tab and write your prompts.
- Press "Refresh models" and select the model you want to use. (If nothing appears, try reload/restart the webui)
- Upload your image, select a preprocessor, and you're done.
Currently it supports both full models and trimmed models. Use extract_controlnet.py to extract a ControlNet from an original .pth file.
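For intuition, a full ControlNet .pth bundles the base Stable Diffusion weights together with the control network, and a "trimmed" model keeps only the control part. Below is a minimal conceptual sketch of that trimming, assuming the "control_model." key prefix used by the original lllyasviel/ControlNet checkpoints; it is not the extension's extract_controlnet.py itself.

```python
# Conceptual sketch only: a "trimmed" model keeps just the control network weights
# from a full checkpoint. The "control_model." prefix follows the original
# lllyasviel/ControlNet checkpoints; in practice, use the bundled extract_controlnet.py.
import torch

full = torch.load("control_sd15_canny.pth", map_location="cpu")
state = full.get("state_dict", full)  # some checkpoints nest weights under "state_dict"

trimmed = {k: v for k, v in state.items() if k.startswith("control_model.")}
torch.save(trimmed, "control_canny_trimmed.pth")
```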
Pretrained Models: https://huggingface.co/lllyasviel/ControlNet/tree/main/models
Two methods can be used to reduce the model's filesize:
- Directly extract the ControlNet from the original .pth file using extract_controlnet.py.
- Transfer control from the original checkpoint by computing a weight difference using extract_controlnet_diff.py (see the sketch below).
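The difference method stores the control weights as offsets from the base SD checkpoint, so they can later be re-applied on top of a different base. Here is a rough sketch of the idea, with the file names and key prefixes being assumptions; the real logic lives in extract_controlnet_diff.py.

```python
# Rough sketch of the "difference" extraction: layers shared with the SD UNet are
# stored as (control - base); ControlNet-only layers (hint blocks, zero convs) as-is.
# File names and key prefixes are assumptions; use extract_controlnet_diff.py in practice.
import torch

control = torch.load("control_sd15_canny.pth", map_location="cpu")
control = control.get("state_dict", control)
base = torch.load("v1-5-pruned.ckpt", map_location="cpu")["state_dict"]

diff = {}
for key, weight in control.items():
    if not key.startswith("control_model."):
        continue
    base_key = "model.diffusion_model." + key[len("control_model."):]
    diff[key] = weight - base[base_key] if base_key in base else weight

torch.save(diff, "diff_control_canny.pth")
```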
All types of models can be correctly recognized and loaded. The results of the different extraction methods are discussed in lllyasviel/ControlNet#12 and Mikubill#73.
Pre-extracted model: https://huggingface.co/webui/ControlNet-modules-safetensors
Pre-extracted difference model: https://huggingface.co/kohya-ss/ControlNet-diff-modules
- Don't forget to add some negative prompts; the default negative prompt in the ControlNet repo is "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality".
- Regarding canvas height/width: they are designed for canvas generation. If you want to upload images directly, you can safely ignore them.
Examples (Source / Input / Output images omitted; some use no preprocessor).
(From TencentARC/T2I-Adapter)
T2I-Adapter is a small network that can provide additional guidance for pre-trained text-to-image models.
To use T2I-Adapter models:
- Download files from https://huggingface.co/TencentARC/T2I-Adapter
- Copy the corresponding config file and rename it to the same name as the model (see the table below and the path example after it).
- It's better to use a slightly lower strength (t), such as 0.6-0.8, when generating images with the sketch model. (ref: ldm/models/diffusion/plms.py)
Adapter | Config |
---|---|
t2iadapter_canny_sd14v1.pth | sketch_adapter_v14.yaml |
t2iadapter_sketch_sd14v1.pth | sketch_adapter_v14.yaml |
t2iadapter_seg_sd14v1.pth | image_adapter_v14.yaml |
t2iadapter_keypose_sd14v1.pth | image_adapter_v14.yaml |
t2iadapter_openpose_sd14v1.pth | image_adapter_v14.yaml |
t2iadapter_color_sd14v1.pth | t2iadapter_color_sd14v1.yaml |
t2iadapter_style_sd14v1.pth | t2iadapter_style_sd14v1.yaml |
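For example, pairing t2iadapter_seg_sd14v1.pth with its config per the table above could look like the snippet below; the models/ControlNet destination is an assumption based on the ControlNet model folder mentioned earlier, so adjust the paths to your setup.

```python
# Copy the config next to the adapter model under the same base name.
# The destination folder is assumed to be the models/ControlNet folder used above.
import shutil
from pathlib import Path

models_dir = Path("models/ControlNet")
shutil.copy(models_dir / "image_adapter_v14.yaml",
            models_dir / "t2iadapter_seg_sd14v1.yaml")
```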
Note:
- This implementation is experimental; results may differ from the original repo.
- Some adapters may have mapping deviations (see issue lllyasviel/ControlNet#255)
Examples (Source / Input / Output images omitted; several use no preprocessor, and the style adapter uses a CLIP, non-image input).
Examples by catboxanon, with no tweaking or cherry-picking. (Color Guidance)
(Comparison images omitted: Image / Disabled / Enabled.)
- (Windows) (NVIDIA: Ampere) 4 GB - with --xformers enabled and Low VRAM mode ticked in the UI, goes up to 768x832
The original ControlNet applies control to both conditional (cond) and unconditional (uncond) parts. Enabling this option will make the control only apply to the cond part. Some experiments indicate that this approach improves image quality.
To enable this option, tick "Enable CFG-Based guidance for ControlNet" in the settings.
Note that you need to use a low CFG scale/guidance scale (such as 3-5) and proper weight tuning to get good results.
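A minimal sketch of the idea, assuming the usual classifier-free guidance batch layout of stacked [uncond, cond] halves; this is illustrative, not the extension's actual code:

```python
# Illustrative only: with CFG the UNet runs on a stacked [uncond, cond] batch.
# "CFG-based" guidance adds the ControlNet residual to the cond half only.
import torch

def add_control(eps: torch.Tensor, control: torch.Tensor, cond_only: bool) -> torch.Tensor:
    if not cond_only:
        return eps + control                              # original: both halves
    eps_uncond, eps_cond = eps.chunk(2)
    _, control_cond = control.chunk(2)
    return torch.cat([eps_uncond, eps_cond + control_cond])
```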
Guess Mode is CFG-based ControlNet plus exponential decay in weighting.
See issue Mikubill#236 for more details.
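For reference, the exponential decay in the original ControlNet repo is implemented as per-layer control scales, roughly like the following; the 0.825 base and 13 layers come from lllyasviel/ControlNet's demos, and this extension's exact values may differ.

```python
# Per-layer control scales used by guess mode in the original ControlNet demos:
# shallow layers get heavily decayed weights, the deepest layer keeps full strength.
strength = 1.0
control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)]
# control_scales[0] is roughly 0.1, control_scales[12] is exactly 1.0
```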
Original introduction from ControlNet:
The "guess mode" (or called non-prompt mode) will completely unleash all the power of the very powerful ControlNet encoder.
In this mode, you can just remove all prompts, and then the ControlNet encoder will recognize the content of the input control map, like depth map, edge map, scribbles, etc.
This mode is very suitable for comparing different methods of controlling Stable Diffusion, because the non-prompted generation task is significantly more difficult than the prompted task. In this mode, the performance differences between methods become very salient.
For this mode, we recommend using 50 steps and a guidance scale between 3 and 5.
This option allows multiple ControlNet inputs for a single generation. To enable it, change "Multi ControlNet: Max models amount (requires restart)" in the settings. Note that you will need to restart the WebUI for the change to take effect.
- Guess Mode will apply to all ControlNet units if it is enabled on any of them.
(Example images omitted: Source A / Source B / Output.)
Weight is the weight of the ControlNet "influence". It's analogous to prompt attention/emphasis, e.g. (myprompt: 1.2). Technically, it's the factor by which the ControlNet outputs are multiplied before they are merged with the original SD UNet.
Guidance Start/End is the percentage of total steps over which the ControlNet applies (guidance strength = guidance end). It's analogous to prompt editing/shifting, e.g. [myprompt::0.8] (it applies from the beginning until 80% of total steps).
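Put together, the two controls act roughly like this per sampling step; this is a conceptual sketch, not the extension's actual scheduling code:

```python
# Conceptual sketch: "Weight" scales the ControlNet outputs, while "Guidance Start/End"
# gates the fraction of sampling steps during which they are applied at all.
def control_scale(step: int, total_steps: int,
                  weight: float, guidance_start: float, guidance_end: float) -> float:
    """Factor applied to the ControlNet outputs at this step (0.0 = no control)."""
    progress = step / max(total_steps - 1, 1)   # 0.0 on the first step, 1.0 on the last
    if guidance_start <= progress <= guidance_end:
        return weight                           # scaled like prompt attention/emphasis
    return 0.0
```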
This extension can accept txt2img or img2img tasks via API or external extension call. Note that you may need to enable "Allow other scripts to control this extension" in the settings for external calls.
To use the API: start the WebUI with the --api argument and go to http://webui-address/docs for the documentation, or check out the examples.
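A minimal sketch of an API call with one ControlNet unit follows. The /sdapi/v1/txt2img route is the standard WebUI --api endpoint, but the ControlNet-specific hook name and fields below are illustrative assumptions; confirm the exact schema at http://webui-address/docs and in the linked examples for your installed version.

```python
# Hedged sketch of calling the WebUI API with a ControlNet unit. The txt2img route is
# standard for --api mode; the ControlNet payload fields are assumptions - check /docs.
import base64
import requests

with open("pose.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a person dancing, best quality",
    "negative_prompt": "lowres, bad anatomy, bad hands, missing fingers",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {                    # hook name: assumption, check /docs
            "args": [{
                "input_image": image_b64,  # field names: assumptions, check /docs
                "module": "openpose",
                "model": "control_sd15_openpose",  # use the name from the model dropdown
                "weight": 1.0,
            }]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
images = resp.json()["images"]             # base64-encoded output images
```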
To use an external call: check out the Wiki.
Tested with pytorch nightly: Mikubill#143 (comment)
To use this extension with MPS and normal PyTorch, you currently may need to start the WebUI with --no-half.
Quick start:
# Run WebUI in API mode
python launch.py --api --xformers
# Install/Upgrade transformers
pip install -U transformers
# Install deps
pip install langchain==0.0.101 openai
# Run example
python example/chatgpt.py