Jiangshan Wang1,2, Junfu Pu2, Zhongang Qi2, Jiayi Guo1, Yue Ma3,
Nisha Huang1, Yuxin Chen2, Xiu Li1, Ying Shan2
1 Tsinghua University, 2 Tencent ARC Lab, 3 HKUST
We propose RF-Solver, which solves the rectified-flow ODE with reduced error, enhancing both sampling quality and inversion-reconstruction accuracy for rectified-flow-based generative models. Building on this, we propose RF-Edit, which leverages RF-Solver for image and video editing. Our methods achieve impressive performance on various tasks, including text-to-image generation, image/video inversion, and image/video editing.
- [2024.11.30] Our demo is available on 🤗 Hugging Face Space!
- [2024.11.18] More examples for style transfer are available!
- [2024.11.18] Gradio Demo for image editing is available!
- [2024.11.16] Thanks to @logtd for integrating RF-Solver into ComfyUI!
- [2024.11.11] The homepage of the project is available!
- [2024.11.08] Code for image editing is released!
- [2024.11.08] Paper released!
- ✔️ Release the gradio demo
- ✔️ Release scripts for more image editing cases
- ❌ Release the code for video editing
We derive the exact formulation of the solution to the rectified-flow ODE and approximate its nonlinear term with a Taylor expansion. Using a higher-order expansion significantly reduces the approximation error in the solution, yielding impressive performance on both text-to-image sampling and image/video inversion.
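To illustrate the idea, here is a minimal sketch of such a higher-order Taylor step for the rectified-flow ODE dx/dt = v(x, t). The velocity model `v` and the finite-difference estimate of the velocity derivative are illustrative assumptions, not the repository's actual implementation:

```python
def second_order_step(v, x, t, t_next, delta=1e-2):
    """A sketch of one second-order Taylor step for dx/dt = v(x, t).

    `v` is a learned velocity model; its total time derivative along the
    trajectory is estimated with a finite difference (an illustrative
    choice, not necessarily how the repo computes it).
    """
    h = t_next - t
    v0 = v(x, t)                                # first-order (Euler) velocity
    x_probe = x + delta * v0                    # small Euler probe step
    dv = (v(x_probe, t + delta) - v0) / delta   # finite-difference dv/dt
    return x + h * v0 + 0.5 * h**2 * dv         # Taylor expansion to 2nd order
```

Compared with a plain Euler step (the first two terms), the curvature correction trades one extra network evaluation per step for a smaller local truncation error.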
Based on RF-Solver, we further propose RF-Edit for image and video editing. The RF-Edit framework reuses features from the inversion stage during denoising, enabling high-quality editing while preserving the structural information of the source image/video. RF-Edit contains two sub-modules, one for image editing and one for video editing.
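Conceptually, this feature sharing can be sketched as below; the names (`feature_bank`, `save_feature`, `inject_steps`) are hypothetical and only illustrate the mechanism of caching features during inversion and re-injecting them during denoising:

```python
# Illustrative sketch of RF-Edit-style feature sharing (names are hypothetical).
feature_bank = {}

def save_feature(step, block, feat):
    """During inversion: cache a self-attention feature per (step, block)."""
    feature_bank[(step, block)] = feat

def maybe_inject(step, block, feat, inject_steps):
    """During denoising: reuse the cached source feature for the first
    `inject_steps` steps to preserve source structure; otherwise keep the
    freshly computed feature of the edited branch."""
    if step < inject_steps:
        return feature_bank.get((step, block), feat)
    return feat
```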
Our code uses the same environment as FLUX. You can refer to the official FLUX repo, or run the following commands to set up the environment:
```bash
conda create --name RF-Solver-Edit python=3.10
conda activate RF-Solver-Edit
pip install -e ".[all]"
```
We provide several scripts to reproduce the results in the paper, covering three types of editing: stylization, adding, and replacing. We suggest running the experiments on a single A100 GPU.
| | | | |
| :--- | :---: | :---: | :---: |
| Ref Style | | | |
| Editing Scripts | Trump | Marilyn Monroe | Einstein |
| Edited image | | | |
| Editing Scripts | Biden | Batman | Harry Potter |
| Edited image | | | |
| | | | |
| :--- | :---: | :---: | :---: |
| Source image | | | |
| Editing Scripts | + hiking stick | horse -> camel | + dog |
| Edited image | | | |
We provide a gradio demo for image editing, which is also available on our 🤗 Hugging Face Space! You can run the demo on your own device using the following commands:
```bash
cd src
python gradio_demo.py
```
Here is an example of using the gradio demo to edit an image! Note that "Number of inject steps" means the number of feature-sharing steps in RF-Edit, which strongly affects the quality of the edited results. We suggest tuning this parameter and selecting the result with the best visual quality.
You can also run the following script to edit your own image.
```bash
cd src
python edit.py --source_prompt [describe the content of your image or leave it as null] \
               --target_prompt [describe your editing requirements] \
               --guidance 2 \
               --source_img_dir [the path of your source image] \
               --num_steps 30 \
               --inject [typically set to a number between 2 and 8] \
               --name 'flux-dev' --offload \
               --output_dir [output path]
```
Similarly, the `--inject` argument specifies the number of feature-sharing steps in RF-Edit, which strongly affects the editing quality.
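For reference, a filled-in command for the horse-to-camel case might look like this (the prompts, image path, and output directory below are illustrative placeholders, not files shipped with the repo):

```bash
cd src
python edit.py --source_prompt "A photo of a horse standing in a field" \
               --target_prompt "A photo of a camel standing in a field" \
               --guidance 2 \
               --source_img_dir path/to/horse.jpg \
               --num_steps 30 \
               --inject 3 \
               --name 'flux-dev' --offload \
               --output_dir path/to/output
```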
If you find our work helpful, please star ⭐ this repo and cite 📄 our paper. Thanks for your support!
@article{wang2024taming,
title={Taming Rectified Flow for Inversion and Editing},
author={Wang, Jiangshan and Pu, Junfu and Qi, Zhongang and Guo, Jiayi and Ma, Yue and Huang, Nisha and Chen, Yuxin and Li, Xiu and Shan, Ying},
journal={arXiv preprint arXiv:2411.04746},
year={2024}
}
We thank the authors of FLUX for their clean codebase.
The code in this repository is still being reorganized. Errors introduced during this process may cause the code to malfunction or produce results that differ from those reported in the paper. If you have any questions or concerns, please email wjs23@mails.tsinghua.edu.cn.