Stable DreamBooth

This is an implementation of DreamBooth based on Stable Diffusion.


Results

DreamBooth results from the original paper: [figure: results from the DreamBooth paper]

The reproduced results: [figure: results reproduced with this implementation]

Requirements

Hardware

  • A GPU with at least 30 GB of memory.
  • Training takes about 10 minutes on an A100 80GB GPU with batch_size set to 4.

Environment Setup

Create a conda environment with PyTorch >= 1.11:

conda env create -f environment.yaml
conda activate stable-diffusion

Quick Start

python sample.py # Generate class samples for prior preservation.
python train.py # Fine-tune the Stable Diffusion model.

The generated results are saved in logs/dog_finetune.
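For context, the class samples from sample.py act as regularization images for DreamBooth's prior-preservation loss: the model is fine-tuned on the instance images while being penalized for drifting on the generic class. In ε-prediction form the objective is roughly (a paraphrase of the DreamBooth paper's loss; λ weights the class-prior term):

\mathcal{L} = \mathbb{E}\big[\lVert \epsilon_\theta(x_t, c) - \epsilon \rVert_2^2\big] + \lambda\,\mathbb{E}\big[\lVert \epsilon_\theta(x'_t, c_{\mathrm{pr}}) - \epsilon' \rVert_2^2\big]

where c is the instance prompt (e.g. "photo of a [V] dog"), c_pr is the class prompt (e.g. "photo of a dog"), and x'_t are noised versions of the sampled class images.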

Finetuning with Your Own Data

1. Data Preparation

  1. Collect 3~5 images of an object and save them into the data/mydata/instance folder.
  2. Sample images of the same class as the specified object using sample.py.
    1. Change the corresponding variables in sample.py, as in the sketch after this list. The prompt should be of the form "a {class}", and save_dir should be changed to data/mydata/class.
    2. Run the sample script.
    python sample.py
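The variables to edit live near the top of sample.py. A sketch of the intended values (the exact variable names are assumptions; check the script itself):

# Variables to adjust in sample.py (names are assumptions, values follow the steps above).
prompt = "a dog"                # "a {class}" for your object's class
save_dir = "data/mydata/class"  # destination for the sampled class images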

2. Finetuning

  1. Adjust the TrainConfig in train.py (see the sketch after this list).
  2. Start training.
    python train.py
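The TrainConfig fields you will most likely need to change look roughly like this (a hedged sketch: the paths, prompts, and batch_size come from this README, but the exact field names are assumptions):

from dataclasses import dataclass

@dataclass
class TrainConfig:
    instance_prompt: str = "photo of a [V] dog"      # prompt with the unique identifier
    class_prompt: str = "photo of a dog"             # generic class prompt
    instance_data_dir: str = "data/mydata/instance"  # your 3~5 instance images
    class_data_dir: str = "data/mydata/class"        # class images from sample.py
    log_dir: str = "logs/mydata_finetune"            # hypothetical: output directory
    batch_size: int = 4                              # ~10 min on an A100 80GB at 4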

3. Inference

python inference.py --prompt "photo of a [V] dog in a dog house" --checkpoint_dir logs/dogs_finetune

Generated images are written to the outputs directory by default.
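If you want to use the fine-tuned weights outside inference.py, a minimal sketch with the diffusers API (assuming the checkpoint directory holds a pipeline saved via save_pretrained):

from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline from the training log directory.
pipe = StableDiffusionPipeline.from_pretrained("logs/dogs_finetune").to("cuda")

# Generate and save one image; .images is the pipeline's output field.
image = pipe("photo of a [V] dog in a dog house").images[0]
image.save("outputs/dog_house.png")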

