Video Diffusion Models with Local-Global Context Guidance (IJCAI 2023). Paper: https://arxiv.org/abs/2306.02562
Diffusion models have emerged as a powerful paradigm for video synthesis tasks such as prediction, generation, and interpolation. Due to limited computational budgets, existing methods usually implement conditional diffusion models with an autoregressive inference pipeline, in which the future fragment is predicted from the distribution of a few adjacent past frames. However, conditioning on only a few previous frames cannot capture global temporal coherence, leading to inconsistent or even implausible results in long-term video prediction.
In this paper, we propose a Local-Global Context guided Video Diffusion model (LGC-VD) that captures multi-perception conditions for producing high-quality videos in both conditional and unconditional settings. In LGC-VD, the U-Net is built from stacked residual blocks with self-attention units, avoiding the heavy computational cost of 3D convolutions. We construct a local-global context guidance strategy that captures a multi-perceptual embedding of the past fragment to boost the consistency of future predictions. Furthermore, we propose a two-stage training strategy that alleviates the effect of noisy frames for more stable predictions. Our experiments demonstrate that the proposed method achieves favorable performance on video prediction, interpolation, and unconditional video generation.
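For intuition, the sketch below shows one way a denoising network can be conditioned on both a local cue (adjacent past frames concatenated channel-wise with the noisy target) and a global embedding of the whole past fragment. All module names, shapes, and wiring here are illustrative assumptions, not the exact LGC-VD architecture from the paper.

```python
# Illustrative sketch only: module names, shapes, and wiring are assumptions,
# not the exact LGC-VD architecture described in the paper.
import torch
import torch.nn as nn

class GlobalContextEncoder(nn.Module):
    """Encodes a whole past fragment (B, C, T, H, W) into one context vector."""
    def __init__(self, in_ch=3, dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.AdaptiveAvgPool3d(1),            # pool over time and space
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, past):
        return self.proj(self.features(past).flatten(1))   # (B, dim)

class LocalGlobalDenoiser(nn.Module):
    """Toy denoiser conditioned on local frames (channel concat) and a
    global fragment embedding (injected like a timestep embedding)."""
    def __init__(self, frame_ch=3, n_local=2, dim=256):
        super().__init__()
        self.global_enc = GlobalContextEncoder(frame_ch, dim)
        in_ch = frame_ch * (1 + n_local)        # noisy target + local past frames
        self.in_conv = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.act = nn.SiLU()
        self.out_conv = nn.Conv2d(dim, frame_ch, 3, padding=1)

    def forward(self, noisy_frame, local_frames, past_fragment):
        # Local guidance: concatenate adjacent past frames with the noisy target.
        x = torch.cat([noisy_frame, local_frames], dim=1)
        # Global guidance: broadcast a fragment-level embedding to every pixel.
        g = self.global_enc(past_fragment)[:, :, None, None]
        return self.out_conv(self.act(self.in_conv(x) + g))
```

In this sketch the global embedding is injected additively, in the same spirit as a timestep embedding, so that every spatial location sees fragment-level context in addition to the locally concatenated frames.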
Qualitative comparisons (top to bottom in each figure: Ground Truth, MCVD spatin, MCVD concat, our method).
Our unconditional generation results on BAIR.
Our video infilling results on BAIR.
conda create -n lgc_vd python=3.8
pip install -r requirements.txt
You can prepare the datasets as described below, or directly download the preprocessed datasets from https://github.com/voletiv/mcvd-pytorch.
- Download the Cityscapes video dataset (leftImg8bit_sequence_trainvaltest.zip, 324 GB) from https://www.cityscapes-dataset.com/
- Convert it to HDF5 format and save it in /path/to/Cityscapes_hdf5:

python datasets/cityscapes_convert.py --leftImg8bit_sequence_dir '/path/to/Cityscapes/leftImg8bit_sequence' --image_size 128 --out_dir '/path/to/Cityscapes_hdf5'
- Download the BAIR video dataset (BAIR Robot Pushing Dataset, low resolution) from https://sites.google.com/berkeley.edu/robotic-interaction-datasets
- Convert it to HDF5 format and save it in /path/to/bair_hdf5:

python datasets/bair_convert.py --bair_dir '/path/to/bair' --out_dir '/path/to/bair_hdf5'
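After conversion, it can help to sanity-check the HDF5 files. The snippet below only lists whatever groups and datasets the conversion scripts wrote; the file name train.h5 is a placeholder, and the actual key layout is defined by datasets/cityscapes_convert.py and datasets/bair_convert.py.

```python
# Hypothetical sanity check for a converted file. The file name "train.h5" is a
# placeholder and the key layout is whatever datasets/*_convert.py writes --
# this only prints the structure, it assumes nothing about specific keys.
import h5py

with h5py.File('/path/to/bair_hdf5/train.h5', 'r') as f:
    # Walk every group/dataset and print its name, shape, and dtype (if any).
    f.visititems(lambda name, obj: print(
        name,
        getattr(obj, 'shape', ''),
        getattr(obj, 'dtype', ''),
    ))
```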
You can train on Cityscapes with 4 GPUs using:
python main.py --config configs/cityscape.yml
The pretrained checkpoints can be found here: https://drive.google.com/drive/folders/1fFEVho-OsCJTZ5_P4mEZvzbv0U1Y1KeK?usp=sharing
You can change the task in configs/cityscape_test.yml; the available tasks include video prediction, video interpolation, and video generation.
python main.py --config configs/cityscape_test.yml
The code will calculate PSNR, SSIM, LPIPS and FVD.
python eva.py
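For reference, the sketch below shows how the frame-level metrics are typically computed with standard libraries (scikit-image for PSNR/SSIM, the lpips package for LPIPS). It is an illustrative stand-in, not the repo's eva.py, and FVD (which requires a pretrained video feature extractor) is omitted.

```python
# Illustrative stand-in for what PSNR / SSIM / LPIPS measure per frame; this is
# not the repo's eva.py. FVD needs a pretrained video feature network and is omitted.
import numpy as np
import torch
import lpips                                   # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')             # AlexNet backbone, a common choice

def frame_metrics(pred, gt):
    """pred, gt: uint8 HxWx3 arrays for a single frame."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1
    lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp

# Toy usage with random frames, just to show the expected input format.
pred = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
gt = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(frame_metrics(pred, gt))
```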
Our work is based on https://github.com/lucidrains/denoising-diffusion-pytorch
If you find this project helpful for your research, please consider citing the following BibTeX entry.
@misc{yang2023video,
title={Video Diffusion Models with Local-Global Context Guidance},
author={Siyuan Yang and Lu Zhang and Yu Liu and Zhizhuo Jiang and You He},
year={2023},
eprint={2306.02562},
archivePrefix={arXiv},
primaryClass={cs.CV}
}