Torch implementation of our ECCV18 paper on video prediction from a single still image.
In each panel, from left to right: the single starting frame and the predicted sequence (the next 16 frames).
git clone https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction
cd FlowGrounded-VideoPrediction
Data
- Put the video data (e.g., .mp4 or .avi) in a folder under ./datasets/DTexture/raw/.
- Run the following commands to convert the videos to frames and generate the metadata for training. The testing data are prepared in the same way. Make sure that the metadata for both training and testing are ready before running experiments. A rough sketch of the frame-extraction step is shown after these commands.
cd datasets/
sh data_process.sh
cd ..
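For a rough idea of what the conversion step does, here is a minimal Python sketch that extracts frames from each raw video with OpenCV. The output layout and filename pattern are illustrative assumptions only; data_process.sh remains the authoritative pipeline and also generates the training/testing metadata.

```python
# Illustrative sketch only; data_process.sh is the authoritative pipeline.
# Assumptions: OpenCV (cv2) is installed, and frames are written as
# datasets/DTexture/frames/<video_name>/frame_0000.png, frame_0001.png, ...
# (the real script may use a different layout and also writes the metadata).
import glob
import os

import cv2

RAW_DIR = "datasets/DTexture/raw"
OUT_DIR = "datasets/DTexture/frames"   # hypothetical output location

videos = sorted(glob.glob(os.path.join(RAW_DIR, "*.mp4"))
                + glob.glob(os.path.join(RAW_DIR, "*.avi")))

for video_path in videos:
    name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(OUT_DIR, name)
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()     # read frames until the video ends
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "frame_%04d.png" % idx), frame)
        idx += 1
    cap.release()
    print("%s: %d frames" % (name, idx))
```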
SPyNet
Pretrained models
- Run the following command to download the pretrained VGG (for perceptual loss) and our models learned on the KTH and WavingFlag data for testing.
sh download_models.sh
Training
- Train the 3DcVAE model for flow prediction:
th train_3DcVAE.lua --dataRoot datasets/DTexture
- Train the flow2rgb model for frame generation (a small wrapper that runs both training stages in sequence is sketched after the command below):
th train_flow2rgb.lua --dataRoot datasets/DTexture
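If you want to launch both training stages back to back (for example, overnight on a single GPU), a trivial wrapper like the sketch below works; it simply invokes the two documented commands in sequence.

```python
# Minimal convenience wrapper: runs the two documented training commands
# one after the other; adjust --dataRoot if your data lives elsewhere.
import subprocess

DATA_ROOT = "datasets/DTexture"

for script in ("train_3DcVAE.lua", "train_flow2rgb.lua"):
    subprocess.run(["th", script, "--dataRoot", DATA_ROOT], check=True)
```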
Testing
- Test the two steps (prediction + generation) together:
th test.lua --dataRoot datasets/DTexture
- With ffmpeg installed, run the following command to convert the predicted frames to a GIF or video:
python gif.py
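If you prefer to call ffmpeg directly instead of using gif.py, a sketch along the following lines works. The frame directory, filename pattern, and frame rate below are assumptions for illustration, not the actual names produced by the test script.

```python
# Sketch of invoking ffmpeg on a predicted frame sequence.
# Assumptions: frames are named pred_0000.png, pred_0001.png, ... inside
# FRAME_DIR; adjust the path and pattern to match the actual test output.
import subprocess

FRAME_DIR = "results/pred"   # hypothetical output directory
PATTERN = "pred_%04d.png"    # hypothetical frame naming
FPS = 8                      # playback speed of the resulting clip

subprocess.run(
    ["ffmpeg", "-y",
     "-framerate", str(FPS),
     "-i", f"{FRAME_DIR}/{PATTERN}",
     "prediction.gif"],
    check=True,
)
```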
Citation
@inproceedings{Prediction-ECCV-2018,
author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
title = {Flow-Grounded Spatial-Temporal Video Prediction from Still Images},
booktitle = {European Conference on Computer Vision},
year = {2018}
}
Acknowledgement
- Code is heavily borrowed from DrNet.