Unofficial implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing.
# Requirements
conda env create -f environment.yml
conda activate DragDiffusion
pip install -r requirements.txt
# Obtain feature maps from the Stable Diffusion UNet (WIP)
mv assets/unet_2d_condition.py YOUR_CONDA_ENV/site-packages/diffusers/models/
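The destination is the `diffusers/models/` directory of whichever copy of Diffusers is installed in the activated environment. A minimal sketch for locating that directory (assuming `diffusers` was already installed by the steps above):

```python
# Print the directory holding diffusers' model definitions, i.e. where
# the patched unet_2d_condition.py should be placed.
import os
import diffusers.models

print(os.path.dirname(diffusers.models.__file__))
```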
# Run demo
python visualizer_drag_gradio.py
- drag process
- mask
- Gradio GUI
- imgui GUI
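The Gradio GUI listed above is what `visualizer_drag_gradio.py` starts. If the demo needs to be reachable from another machine, a typical Gradio launch pattern looks like the sketch below (the `demo` block is a placeholder, not this repository's actual UI code):

```python
import gradio as gr

# Placeholder UI; the real script builds the drag-editing interface here.
with gr.Blocks() as demo:
    gr.Markdown("DragDiffusion demo placeholder")

# share=True requests a temporary public URL from Gradio;
# server_name="0.0.0.0" exposes the app on the local network.
demo.launch(server_name="0.0.0.0", share=True)
```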
Follow Diffusers to obtain a pre-trained Stable Diffusion model.
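As a sketch of that step (the model id `runwayml/stable-diffusion-v1-5` is only an example; use whichever Stable Diffusion checkpoint you plan to edit):

```python
import torch
from diffusers import StableDiffusionPipeline

# Downloads the checkpoint into the local Hugging Face cache on first use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU
```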