This repo is an official implementation of LayerDiffuse in pure diffusers, without any GUI, for easier integration into different projects.

Note that this repo uses k-diffusion directly to sample images (diffusers' scheduling system is not used), so you can expect state-of-the-art sampling results in this repo without relying on other UIs.
Note that this project is a Work In Progress (WIP). We are going to port all features of LayerDiffuse (see also sd-forge-layerdiffuse).
You can deploy with:

```bash
git clone https://github.com/lllyasviel/LayerDiffuse_DiffusersCLI.git
cd LayerDiffuse_DiffusersCLI
conda create -n layerdiffuse python=3.10
conda activate layerdiffuse
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
This repo has a simple memory management system; only 8 GB of NVIDIA VRAM is needed.
Note that model downloads are automatic.
The following features are finished:
An algorithm to convert a transparent PNG image into a "padded" image whose invisible pixels are filled with smooth, continuous colors.
This padded RGB format is used for the training of all LayerDiffuse models.
```bash
python demo_rgb_padding.py
```
| Input | Input (Alpha) | Output |
|---|---|---|
| *(example images omitted)* | | |
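The padding idea can be sketched as a normalized-convolution fill: blur the premultiplied color and the alpha coverage, divide to get an alpha-weighted average of nearby visible colors, and adopt that estimate in the transparent region at progressively larger radii. This is an illustrative approximation, not necessarily the repo's exact algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pad_rgb(rgba: np.ndarray) -> np.ndarray:
    """Fill fully transparent pixels of an (H, W, 4) uint8 image with
    smooth colors diffused from the visible region; visible pixels keep
    their original colors. (Sketch; not the repo's exact algorithm.)"""
    rgb = rgba[..., :3].astype(np.float32)
    alpha = rgba[..., 3:].astype(np.float32) / 255.0
    color = rgb * alpha          # premultiplied color
    weight = alpha.copy()
    sigma = 2.0
    while weight.min() < 1e-3 and sigma <= 256.0:
        # Blur premultiplied color and coverage, then divide: an
        # alpha-weighted average of nearby visible colors.
        c = gaussian_filter(color, sigma=(sigma, sigma, 0))
        w = gaussian_filter(weight, sigma=(sigma, sigma, 0))
        estimate = c / np.clip(w, 1e-6, None)
        # Adopt the estimate only where no color exists yet AND the blur
        # actually reached; widen sigma for the remaining pixels.
        adopt = (weight < 1e-3) & (w > 1e-4)
        color = np.where(adopt, estimate, color)
        weight = np.where(adopt, 1.0, weight)
        sigma *= 2.0
    padded = color / np.clip(weight, 1e-6, None)
    padded = np.where(alpha > 1e-3, rgb, padded)  # keep visible pixels intact
    return np.clip(padded + 0.5, 0, 255).astype(np.uint8)
```

The multi-scale loop is what makes the fill "smooth and continuous": each pass extends colors further into the transparent region without hard seams.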
Diffuse with transparency and decode the results.
```bash
python demo_sdxl_t2i.py
```
Prompt: "glass bottle, high quality"
| Output (Transparent image) | Output (Visualization with Checkerboard) |
|---|---|
| *(example images omitted)* | |
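The "visualization with checkerboard" column is plain alpha compositing of the transparent output over a gray checkerboard. A minimal sketch, where the cell size and the two gray levels are assumptions:

```python
import numpy as np

def over_checkerboard(rgba: np.ndarray, cell: int = 16) -> np.ndarray:
    """Composite an (H, W, 4) uint8 RGBA image over a gray checkerboard,
    the usual way to preview transparency. Cell size and gray values
    (153/204) are arbitrary choices, not taken from this repo."""
    h, w = rgba.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    checks = ((yy // cell + xx // cell) % 2).astype(np.float32)
    bg = (153.0 + 51.0 * checks)[..., None]      # alternating 153 / 204 gray
    rgb = rgba[..., :3].astype(np.float32)
    a = rgba[..., 3:].astype(np.float32) / 255.0
    out = rgb * a + bg * (1.0 - a)               # straight-alpha "over"
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)
```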
Encode existing PNG images, diffuse, and decode results.
```bash
python demo_sdxl_i2i.py
```
Prompt: "a handsome man with curly hair, high quality"
Denoise: 0.7
| Input (Transparent image) | Output (Transparent image) | Output (Visualization with Checkerboard) |
|---|---|---|
| *(example images omitted)* | | |
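A denoise strength of 0.7 conventionally means the input image's latent is noised to a mid-schedule level and only the last 70% of the sampling steps are run. A hedged sketch of that arithmetic (a common k-diffusion-style convention; this repo's exact rule may differ):

```python
def i2i_start_step(num_steps: int, denoise: float) -> int:
    """First sampling step to run for image-to-image: the input latent is
    noised to the sigma at this step, then only the remaining steps are
    sampled. (Common convention; not necessarily this repo's exact rule.)"""
    return max(0, min(num_steps, round(num_steps * (1.0 - denoise))))

# e.g. with 20 steps and denoise 0.7, sampling starts at step 6,
# so only the final 14 steps are run on the noised input latent
```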
The following features are going to be ported from sd-forge-layerdiffuse soon:
- SD15 transparent t2i and i2i.
- SDXL layer system.
- SD15 layer system.
- Some possible applications using mask/inpaint.