This repository contains a TensorFlow implementation of the paper "Learning Dynamic Generator Model by Alternating Back-Propagation Through Time".
Project page: http://www.stat.ucla.edu/~jxie/DynamicGenerator/DynamicGenerator.html
@article{DG,
    author  = {Xie, Jianwen and Gao, Ruiqi and Zheng, Zilong and Zhu, Song-Chun and Wu, Ying Nian},
    title   = {Learning Dynamic Generator Model by Alternating Back-Propagation Through Time},
    journal = {The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI)},
    year    = {2019}
}
- Python 2.7 or Python 3.3+
- TensorFlow r1.0+
- SciPy
- Pillow
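A quick sanity check that the required packages are importable (a minimal sketch):

```python
# A minimal environment check (a sketch; version attribute names can differ across releases).
import sys
import tensorflow as tf
import scipy
import PIL

print("Python: " + sys.version.split()[0])    # expect 2.7 or 3.3+
print("TensorFlow: " + tf.__version__)        # expect r1.0 or later
print("SciPy: " + scipy.__version__)
print("Pillow: " + PIL.__version__)           # very old Pillow releases expose PIL.VERSION instead
```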
First, prepare your training data in a folder, for example ./trainingVideo/dynamicTexture/fire
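The exact expected layout is defined by the repository's data utilities. As a purely illustrative sketch, the snippet below loads a folder of per-frame images into a single array; the one-image-per-frame layout, the frame size, and the helper name are all assumptions:

```python
# Illustrative only: load a folder of frame images into a [num_frames, H, W, 3] array.
# The one-image-per-frame layout is an assumption, not necessarily the repo's exact format.
import os
import numpy as np
from PIL import Image

def load_video_frames(folder, size=(128, 128)):
    frames = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(('.png', '.jpg', '.jpeg')):
            img = Image.open(os.path.join(folder, name)).convert('RGB').resize(size)
            frames.append(np.asarray(img, dtype=np.float32))
    return np.stack(frames)  # shape: [num_frames, H, W, 3]

video = load_video_frames('./trainingVideo/dynamicTexture/fire')
print(video.shape)
```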
To train a model on the dynamic texture fire:
$ python main_dyn_G.py --category fire --isTraining True
The training results will be saved in ./output_synthesis/fire/final_result.
The learned models will be saved in ./output_synthesis/fire/model.
To test the learned model for synthesis:

$ python main_dyn_G.py --category fire --isTraining False --num_sections_in_test 4 --num_batches_in_test 2 --ckpt_name model.ckpt-2960
'num_sections_in_test' specifies the number of sections (truncations) that make up each synthesized video, and 'num_batches_in_test' specifies the number of independent videos to synthesize, as sketched below.
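To make the two flags concrete, here is a small arithmetic sketch; the frames_per_section value is a hypothetical placeholder, since the actual per-section length depends on the model configuration:

```python
# Hypothetical illustration of the two testing flags; frames_per_section is an assumed value.
num_sections_in_test = 4   # sections (truncations) concatenated into each synthesized video
num_batches_in_test = 2    # independent synthesized videos
frames_per_section = 30    # assumption: frames generated per section

print("frames per synthesized video: %d" % (num_sections_in_test * frames_per_section))
print("number of synthesized videos: %d" % num_batches_in_test)
```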
The testing results will be saved in ./output_synthesis/fire/final_result_testing.
For each category, the first video is the observed one, and the others are synthesized videos generated by the learned model. The observed video is 60 frames long, while each synthesized video is 120 frames long.
First, prepare your training data in a folder, for example ./trainingVideo/action_dataset/animal30_running
To train a model on the animal30_running dataset:
$ python main_dyn_G_motion.py --category animal30_running --isTraining True
The training results will be saved in ./output_synthesis/animal30_running/final_result.
The learned models will be saved in ./output_synthesis/animal30_running/model.
To test the learned model for synthesis:

$ python main_dyn_G_motion.py --category animal30_running --isTraining False --num_sections_in_test 2 --num_batches_in_test 2 --ckpt_name model.ckpt-6990
The testing results will be saved in ./output_synthesis/animal30_running/final_result_testing.
Synthesizing animal actions (animal action dataset): the first row shows the observed videos, while the second and third rows display two corresponding synthesized videos for each observed video. In the experiment of synthesizing human actions, the observed videos have fewer frames than the synthesized videos.
The model can also be learned from occluded training videos, recovering the missing pixels along the way. With an external mask file (a sketch of how such a file might be constructed follows the commands below):

Type 1: missing frames

$ python main_dyn_G_recovery.py --category ocean --isTraining True --training_mode incomplete --mask_type external --mask_file missing_frame_type.mat

Type 2: single region masks

$ python main_dyn_G_recovery.py --category ocean --isTraining True --training_mode incomplete --mask_type external --mask_file region_type.mat

Alternatively, with a built-in mask type:

Type 1: missing frames

$ python main_dyn_G_recovery.py --category ocean --isTraining True --training_mode incomplete --mask_type missingFrames

Type 2: single region masks

$ python main_dyn_G_recovery.py --category ocean --isTraining True --training_mode incomplete --mask_type randomRegion
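If you prefer to build your own external mask file, the sketch below constructs both mask types and saves them with scipy.io.savemat. The variable name 'mask' stored in the .mat file and the video shape are assumptions; check the repository's mask-loading code for the exact convention.

```python
# Sketch: build external occlusion masks and save them as .mat files.
# The stored variable name 'mask' and the video shape are assumptions.
import numpy as np
import scipy.io as sio

num_frames, height, width = 60, 128, 128
mask = np.ones((num_frames, height, width), dtype=np.float32)  # 1 = observed, 0 = occluded

# Type 1: missing frames -- occlude a random subset of whole frames.
missing = np.random.choice(num_frames, size=num_frames // 2, replace=False)
mask[missing] = 0
sio.savemat('missing_frame_type.mat', {'mask': mask})

# Type 2: single region -- occlude a fixed rectangular region in every frame.
mask[:] = 1
mask[:, 32:96, 32:96] = 0
sio.savemat('region_type.mat', {'mask': mask})
```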
The results will be saved in ./output_recovery/ocean/final_result.
In each example, the first video is the occluded training video, and the second is the recovered result.
The same learning-from-incomplete-data strategy can remove a target object from a video by treating the region it occupies as missing. For example, with an external mask file (a sketch of one possible mask file follows the command):

$ python main_dyn_G_background_inpainting.py --category boats --isTraining True --training_mode incomplete --mask_type external --mask_file mask128.mat
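Since the target object typically moves, the occluded region may need to move with it. A minimal sketch under the same assumed .mat convention as above; the box size and the linear motion are hypothetical:

```python
# Sketch: a mask that follows a horizontally moving object.
# Same assumed .mat convention (variable named 'mask') as the recovery masks above.
import numpy as np
import scipy.io as sio

num_frames, height, width = 60, 128, 128
mask = np.ones((num_frames, height, width), dtype=np.float32)
box_h, box_w = 40, 30
for t in range(num_frames):
    x = int((width - box_w) * t / float(num_frames - 1))  # hypothetical linear motion
    mask[t, 40:40 + box_h, x:x + box_w] = 0               # 0 = region to remove and inpaint
sio.savemat('mask128.mat', {'mask': mask})
```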
The results will be saved in ./output_background_inpainting/boats/final_result.
In each example, the first video is the original, and the second is the result after the target object has been removed by our algorithm. (Left) Removing a walking person in front of a fountain. (Right) Removing a moving boat on the lake.
For any questions, please contact Jianwen Xie (jianwen@ucla.edu), Ruiqi Gao (ruiqigao@ucla.edu), or Zilong Zheng (zilongzheng0318@ucla.edu).