FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation
Shuai Yang, Yifan Zhou, Ziwei Liu and Chen Change Loy
in CVPR 2024
Project Page | Paper | Supplementary Video | Input Data and Video Results
Abstract: The remarkable efficacy of text-to-image diffusion models has motivated extensive exploration of their potential application in video domains. Zero-shot methods seek to extend image diffusion models to videos without requiring model training. Recent methods mainly focus on incorporating inter-frame correspondence into attention mechanisms. However, the soft constraint imposed on where to attend to valid features can sometimes be insufficient, resulting in temporal inconsistency. In this paper, we introduce FRESCO, which establishes intra-frame correspondence alongside inter-frame correspondence to form a more robust spatial-temporal constraint. This enhancement ensures a more consistent transformation of semantically similar content across frames. Beyond mere attention guidance, our approach explicitly updates features to achieve high spatial-temporal consistency with the input video, significantly improving the visual coherence of the resulting translated videos. Extensive experiments demonstrate the effectiveness of our proposed framework in producing high-quality, coherent videos, marking a notable improvement over existing zero-shot methods.
Features:
- Temporal consistency: uses intra- and inter-frame constraints, with better consistency and coverage than optical flow alone.
- Compared with our previous work Rerender-A-Video, FRESCO is more robust to large and rapid motion.
- Zero-shot: no training or fine-tuning required.
- Flexibility: compatible with off-the-shelf models (e.g., ControlNet, LoRA) for customized translation.
[Teaser video: teasers.mp4]
Updates:
- [05/2024] The Diffusers pipeline is available: FRESCO Community Pipeline.
- [04/2024] Integrated into 🤗 Hugging Face. Enjoy the web demo!
- [03/2024] Paper is released.
- [03/2024] Code is released.
- [03/2024] This website is created.
TODO:
- Integrate into Diffusers
- Add Hugging Face web demo
- Add webUI
- Update readme
- Upload paper to arXiv, release related material
Installation:
- Clone the repository.
git clone https://github.com/williamyang1991/FRESCO.git
cd FRESCO
- Set up the environment. You can simply set it up with pip based on requirements.txt:
  - Create a conda environment and install torch >= 2.0.0. Here is an example script to install torch 2.0.0 + CUDA 11.8:
conda create --name diffusers python==3.8.5
conda activate diffusers
pip install torch==2.0.0 torchvision==0.15.1 --index-url https://download.pytorch.org/whl/cu118
  - Run pip install -r requirements.txt in the environment where torch is installed.
- We have tested on torch 2.0.0/2.1.0 and diffusers 0.19.3.
- If you use a newer version of diffusers, you need to modify my_forward() accordingly (see the patching sketch after the installation steps).
- Run the installation script. The required models will be downloaded to ./model, ./src/ControlNet/annotator and ./src/ebsynth/deps/ebsynth/bin. Requires access to huggingface.co:
python install.py
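If my_forward() needs adapting to a newer diffusers release, the general pattern is to rebind the forward of the UNet's self-attention modules. Below is a minimal, hypothetical sketch of that pattern only; the actual FRESCO logic lives in this repo's my_forward(), and the helper names here are illustrative.

```python
# Hypothetical sketch of the attention-patching pattern; the real
# my_forward() in this repo carries the FRESCO-specific logic.
import types
from diffusers.models.attention_processor import Attention

def make_forward(module):
    original_forward = module.forward  # keep the stock behavior
    def my_forward(self, hidden_states, encoder_hidden_states=None,
                   attention_mask=None, **kwargs):
        # ... FRESCO-guided / cross-frame logic would be inserted here ...
        return original_forward(hidden_states,
                                encoder_hidden_states=encoder_hidden_states,
                                attention_mask=attention_mask, **kwargs)
    return types.MethodType(my_forward, module)

def patch_self_attention(unet):
    # 'attn1' modules are the self-attention blocks in SD 1.5 UNets
    for name, module in unet.named_modules():
        if isinstance(module, Attention) and name.endswith('attn1'):
            module.forward = make_forward(module)
```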
- You can run the demo with run_fresco.py:
python run_fresco.py ./config/config_music.yaml
- For issues with Ebsynth, please refer to the issues page.
- Launch the Gradio WebUI:
python webUI.py
The Gradio app lets you flexibly change the inference options; just try them out for more details.
Upload your video, input the prompt, select the model and seed, and hit:
- Run Key Frames: detect keyframes, translate all keyframes.
- Run Propagation: propagate the keyframes to the other frames for full video translation.
- Run All: run Run Key Frames and Run Propagation in sequence.
Select the model:
- Base model: base Stable Diffusion model (SD 1.5)
- Stable Diffusion 1.5: official model
- rev-Animated: a semi-realistic (2.5D) model
- realistic-Vision: a photo-realistic model
- flat2d-animerge: a cartoon model
- You can add other models from huggingface.co by modifying this line (see the illustrative snippet below).
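For illustration, adding a model amounts to mapping a display name to a Hugging Face model ID that can be loaded with from_pretrained. A hypothetical sketch (the actual variable lives in webUI.py; verify the model IDs on huggingface.co):

```python
# Hypothetical sketch: map WebUI display names to Hugging Face model IDs.
# Check webUI.py for the real variable and huggingface.co for valid IDs.
model_dict = {
    'Stable Diffusion 1.5': 'runwayml/stable-diffusion-v1-5',
    'revAnimated': 'stablediffusionapi/rev-animated',
    'realisticVision': 'SG161222/Realistic_Vision_V2.0',
    'flat2d-animerge': 'stablediffusionapi/flat-2d-animerge',
    # register your own model here:
    'my-custom-model': 'your-username/your-model-id',
}
```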
We provide a rich set of advanced options to play with.
Advanced options for single frame processing
- Frame resolution: resize the short side of the video to 512.
- ControlNet related:
- ControlNet strength: how well the output matches the input control edges
- Control type: HED edge, Canny edge, Depth map
- Canny low/high threshold: lower values keep more edge details
- SDEdit related:
- Denoising strength: repaint degree (low value to make the output look more like the original video)
- Preserve color: preserve the color of the original video
- SD related:
- Steps: number of denoising steps
- CFG scale: how well the output matches the prompt
- Added prompt/Negative prompt: supplementary prompts
- FreeU related:
- FreeU first/second-stage backbone factor: =1 do nothing; >1 enhance output color and details
- FreeU first/second-stage skip factor: =1 do nothing; <1 enhance output color and details
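For reference, recent diffusers releases (newer than the 0.19.3 we tested) expose FreeU directly on the pipeline, where b1/b2 are the first/second-stage backbone factors and s1/s2 the skip factors. A minimal sketch, assuming a plain SD 1.5 pipeline:

```python
# Minimal sketch: enabling FreeU on a diffusers pipeline. Requires a
# diffusers version that provides enable_freeu (newer than 0.19.3).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
# b1/b2: first/second-stage backbone factors (>1 enhances color and details)
# s1/s2: first/second-stage skip factors (<1 enhances color and details)
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
```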
Advanced options for FRESCO constraints
- Keyframe related
- Number of frames: Total frames to be translated
- Number of frames in a batch: To avoid out-of-memory, use small batch size
- Min keyframe interval (s_min): consecutive keyframes are at least s_min frames apart
- Max keyframe interval (s_max): consecutive keyframes are at most s_max frames apart (see the selection sketch after this list)
- FRESCO constraints
- FRESCO-guided Attention:
- spatial-guided attention: Check to enable spatial-guided attention
- cross-frame attention: Check to enable efficient cross-frame attention
- temporal-guided attention: Check to enable temporal-guided attention
- FRESCO-guided optimization:
- spatial-guided optimization: Check to enable spatial-guided optimization
- temporal-guided optimization: Check to enable temporal-guided optimization
- Background smoothing: Check to enable background smoothing (best for static background)
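To make the s_min/s_max semantics above concrete, here is an illustrative keyframe-selection sketch, not the repo's exact algorithm: walk through the frames, force a keyframe once s_max is reached, and allow an earlier one after s_min when the content has changed enough.

```python
# Illustrative keyframe selection under s_min / s_max constraints
# (a sketch of the idea, not the repo's exact algorithm).
import numpy as np

def select_keyframes(frames, s_min=5, s_max=20, thresh=30.0):
    """Return keyframe indices spaced at least s_min and at most s_max
    apart, adding a keyframe early when the frame difference is large."""
    keys = [0]
    last = 0
    for i in range(1, len(frames)):
        gap = i - last
        diff = np.abs(frames[i].astype(np.float32)
                      - frames[last].astype(np.float32)).mean()
        if gap >= s_max or (gap >= s_min and diff > thresh):
            keys.append(i)
            last = i
    return keys
```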
Advanced options for the full video translation
- Gradient blending: apply Poisson blending to reduce ghosting artifacts; it may slow down the process and increase flickering (see the sketch after this list).
- Number of parallel processes: multiprocessing to speed up the process. A large value (e.g., 4) is recommended.
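For intuition, Poisson (gradient-domain) blending stitches a region into a target image by matching gradients at the seam rather than raw pixel values, which suppresses ghosting at the boundary. A rough sketch of the idea using OpenCV's built-in seamless cloning (the repo ships its own gradient-blending implementation):

```python
# Illustrative sketch of Poisson (gradient-domain) blending via OpenCV.
# The repo implements its own gradient blending; this only shows the idea.
import cv2
import numpy as np

def poisson_blend(src, dst, mask):
    """Blend src into dst inside mask, matching gradients at the seam."""
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))  # center of the masked region
    return cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
```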
We provide a flexible script run_fresco.py to run our method.
Set the options via a config file. For example:
python run_fresco.py ./config/config_music.yaml
We provide some example configs in the config directory.
Most options in the config are the same as those in the WebUI;
please check the explanations in the WebUI section.
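Since the config is plain YAML, its options can be inspected programmatically; the following is a small sketch for dumping the options of an example config (the available keys are whatever the YAML files in ./config define, so we do not list them here).

```python
# Small sketch: print the options of an example config.
# The available keys are defined by the YAML files in ./config/.
import yaml

with open('./config/config_music.yaml') as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f'{key}: {value}')
```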
We provide a separate Ebsynth Python script video_blend.py with the temporal blending algorithm introduced in Stylizing Video by Example for interpolating style between key frames. It can work on your own stylized key frames independently of our FRESCO algorithm.
video_blend.py [-h] [--output OUTPUT] [--fps FPS] [--key_ind KEY_IND [KEY_IND ...]] [--key KEY] [--n_proc N_PROC] [-ps] [-ne] [-tmp] name
positional arguments:
name Path to input video
optional arguments:
-h, --help show this help message and exit
--output OUTPUT Path to output video
--fps FPS The FPS of output video
--key_ind KEY_IND [KEY_IND ...]
key frame index
--key KEY The subfolder name of stylized key frames
--n_proc N_PROC The max process count
-ps Use poisson gradient blending
-ne Do not run ebsynth (use previous ebsynth output)
-tmp Keep temporary output
An example:
python video_blend.py ./output/dog/ --key keys --key_ind 0 11 23 33 49 60 72 82 93 106 120 137 151 170 182 193 213 228 238 252 262 288 299 --output ./output/dog/blend.mp4 --fps 24 --n_proc 4 -ps
For the details, please refer to our previous work Rerender-A-Video (the main difference is the way of specifying the key frame indices).
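Since typing long --key_ind lists by hand is error-prone, here is a hypothetical helper that derives the indices from the stylized keyframe filenames, assuming they are saved as zero-padded numbered images such as 0001.png in the keys subfolder:

```python
# Hypothetical helper: derive the --key_ind list from keyframe filenames,
# assuming numbered images such as 0001.png in the keys subfolder.
import os
import re

keys_dir = './output/dog/keys'
inds = []
for f in os.listdir(keys_dir):
    m = re.match(r'(\d+)\.(png|jpg)$', f)
    if m:
        inds.append(int(m.group(1)))
print(' '.join(map(str, sorted(inds))))  # paste this after --key_ind
```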
[More results: more_result_1.mp4]
If you find this work useful for your research, please consider citing our paper:
@inproceedings{yang2024fresco,
title = {FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation},
author = {Yang, Shuai and Zhou, Yifan and Liu, Ziwei and Loy, Chen Change},
booktitle = {CVPR},
year = {2024},
}
The code is mainly developed based on Rerender-A-Video, ControlNet, Stable Diffusion, GMFlow and Ebsynth.