This is the official code for our CVPR 2018 paper "Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks".

CVPR 2018 Paper | Project Page | Dataset
- Requirements (see the environment-check sketch after this list):
  - Download our time-lapse dataset.
  - Python 2.7
  - PyTorch 0.3.0 or 0.3.1
  - ffmpeg
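The snippet below is a minimal, hypothetical environment check; it is not part of the repository and only verifies the requirements listed above (Python 2.7, PyTorch 0.3.x, and ffmpeg on the PATH).

```python
import subprocess
import sys

import torch

# The code targets Python 2.7.
assert sys.version_info[:2] == (2, 7), \
    "Python 2.7 is required, found %d.%d" % sys.version_info[:2]

# The code targets PyTorch 0.3.0 or 0.3.1.
assert torch.__version__.startswith("0.3."), \
    "PyTorch 0.3.x is required, found " + torch.__version__

# ffmpeg must be available on the PATH.
subprocess.check_call(["ffmpeg", "-version"])

print("Environment looks OK.")
```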
- Testing:
  - Download our pretrained models.
  - Run:
    ```
    python test.py --cuda --testf your_test_dataset_folder
    ```
- Sample outputs:
  - The `./sample_outputs` directory contains mp4 videos generated on my machine (see the inspection sketch below).
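To inspect the sample videos programmatically, here is a hypothetical sketch (not part of the repository) that uses ffprobe, which ships with the ffmpeg requirement, to list each clip and its duration.

```python
import glob
import os
import subprocess

# Print each sample video and its duration in seconds.
# ffprobe ships with ffmpeg, which is already a requirement.
for path in sorted(glob.glob(os.path.join("sample_outputs", "*.mp4"))):
    duration = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ]).strip()
    print("%s: %s s" % (path, duration))
```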
- Citation:
  ```
  @InProceedings{Xiong_2018_CVPR,
    author    = {Xiong, Wei and Luo, Wenhan and Ma, Lin and Liu, Wei and Luo, Jiebo},
    title     = {Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2018}
  }
  ```