FurkanGozukara/stable-diffusion-xl-demo

---
title: Stable Diffusion XL 0.9
emoji: 🔥
colorFrom: yellow
colorTo: gray
sdk: gradio
sdk_version: 3.11.0
app_file: app.py
pinned: true
license: mit
---

StableDiffusion XL Gradio Demo

This is a Gradio demo supporting Stable Diffusion XL 0.9. It loads both the base and the refiner model.

This is forked from the StableDiffusion v2.1 demo. Refer to the git commits to see the changes.

Update: Colab is supported! You can run this demo on Colab for free even on T4. Open In Colab

Examples

Left: SDXL 0.9. Right: SD v2.1.

Without any tuning, SDXL generates much better images than SD v2.1!

Example 1

Example 2

Example 3

Example 4

Example 5

Installation

With torch 2.0.1 installed, we also need to install:

```shell
pip install accelerate transformers invisible-watermark "numpy>=1.17" "PyWavelets>=1.1.1" "opencv-python>=4.1.0.25" safetensors "gradio==3.11.0"
pip install git+https://github.com/huggingface/diffusers.git@sd_xl
```

Launching

Access to the weights is free, but you need to submit a quick form to get it.

There are two ways to load the weights. After getting access, you can either clone them locally or let this repo load them for you from the hub.

Option 1

If you have cloned both repositories (base, refiner) locally, run the following (replace /path_to_sdxl with your local path):

```shell
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 SDXL_MODEL_DIR=/path_to_sdxl python app.py
```

Option 2

If you want to load the weights from the Hugging Face hub (set up a Hugging Face access token first):

```shell
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 ACCESS_TOKEN=YOUR_HF_ACCESS_TOKEN python app.py
```
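The two options boil down to an environment-variable check. Here is a minimal sketch of that selection logic (illustrative only; the subdirectory and hub repo names are assumptions — check app.py for the actual logic):

```python
import os

# Option 1: SDXL_MODEL_DIR points at local clones of both repositories.
# Option 2: no local dir, so fall back to hub ids plus ACCESS_TOKEN.
model_dir = os.environ.get("SDXL_MODEL_DIR")
access_token = os.environ.get("ACCESS_TOKEN")

if model_dir:
    # Load base and refiner from the local clone (directory names assumed).
    base = os.path.join(model_dir, "stable-diffusion-xl-base-0.9")
    refiner = os.path.join(model_dir, "stable-diffusion-xl-refiner-0.9")
else:
    # Load from the Hugging Face hub, authenticating with the access token.
    base = "stabilityai/stable-diffusion-xl-base-0.9"
    refiner = "stabilityai/stable-diffusion-xl-refiner-0.9"

print(base, refiner)
```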

torch.compile support

Turning on torch.compile makes overall inference faster. However, it adds some overhead to the first run, which has to wait for compilation.
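The one-time compilation cost can be seen with a minimal torch.compile example (this sketch uses backend="eager" so it runs without a GPU or C++ toolchain; app.py would compile the pipeline's UNet with the default backend to get the real speedup):

```python
import torch

def double(x):
    return x * 2

# backend="eager" skips code generation, so this runs anywhere;
# the default (inductor) backend is what provides the actual speedup.
compiled = torch.compile(double, backend="eager")

# The first call pays the compilation overhead;
# subsequent calls reuse the compiled graph.
print(compiled(torch.tensor([1.0, 2.0])))
```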

To save memory

  1. Turn on pipe.enable_model_cpu_offload() and turn off pipe.to("cuda") in app.py.
  2. Turn off the refiner by setting enable_refiner to False.
  3. See the diffusers documentation for more ways to save memory and speed things up.

Several options through environment variables

  • SDXL_MODEL_DIR and ACCESS_TOKEN: load SDXL from a local directory or from the HF hub.
  • ENABLE_REFINER=true/false: turn the refiner on or off (the refiner refines the generation).
  • OUTPUT_IMAGES_BEFORE_REFINER=true/false: useful if the refiner is enabled; outputs images both before and after the refiner stage.
  • SHARE=true/false: create a public link (useful for sharing and on Colab).
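The true/false flags above could be parsed along these lines (a sketch; the helper name env_flag is hypothetical, and app.py's actual parsing may differ):

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Read a true/false environment variable, falling back to a default.
    Hypothetical helper; app.py may parse its flags differently."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

# Unset variables fall back to their defaults.
enable_refiner = env_flag("ENABLE_REFINER", True)
share = env_flag("SHARE", False)
print(enable_refiner, share)
```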

If you enjoy this demo, please give this repo a star ⭐.
