- TensorFlow 1.3 (or latest, although not tested)
- Preferably a Titan X for synthesizing 12 frames
- Appearance-stream tfmodel
- Dynamics-stream tfmodel
- Dynamic textures
- Static textures (for dynamics style transfer)
Store the appearance-stream tfmodel in ./models. Store the dynamics-stream tfmodel in ./models as well; the filepath to this model is your --dynamics_model path.
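As a quick sanity check (assuming a Unix shell), you can confirm both models are in place before running synthesis:
ls models/*.tfmodel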
To synthesize a dynamic texture, run:
python synthesize.py --type=dts --gpu=<NUMBER> --runid=<NAME> --dynamics_target=data/dynamic_textures/<FOLDER> --dynamics_model=models/<TFMODEL>
Store your chosen dynamic texture image sequence in a folder under ./data/dynamic_textures. This folder is your --dynamics_target path. For example:
python synthesize.py --type=dts --gpu=0 --runid="my_cool_fish" --dynamics_target=data/dynamic_textures/fish --dynamics_model=models/MSOEnet_ucf101train01_6e-4_allaug_exceptscale_randorder.tfmodel
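In this example, data/dynamic_textures/fish is a folder holding the frames of the target sequence as individual images; a rough sketch of its contents (the filenames, image format, and frame count here are hypothetical):
data/dynamic_textures/fish/frame_001.png
data/dynamic_textures/fish/frame_002.png
...
data/dynamic_textures/fish/frame_012.png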
For dynamics style transfer, run:
python synthesize.py --type=dst --gpu=<NUMBER> --runid=<NAME> --dynamics_target=data/dynamic_textures/<FOLDER> --dynamics_model=models/<TFMODEL> --appearance_target=data/textures/<IMAGE>
Store your chosen static texture in ./data/textures. The filepath to this texture is your --appearance_target path. For example:
python synthesize.py --type=dst --gpu=0 --runid="whoa_water!" --dynamics_target=data/dynamic_textures/water_4 --appearance_target=data/textures/water_paint_cropped.jpeg --dynamics_model=models/MSOEnet_ucf101train01_6e-4_allaug_exceptscale_randorder.tfmodel
For temporally-endless dynamic texture synthesis, run:
python synthesize.py --type=inf --gpu=<NUMBER> --runid=<NAME> --dynamics_target=data/dynamic_textures/<FOLDER> --dynamics_model=models/<TFMODEL>
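A hypothetical invocation, reusing the fish sequence and dynamics-stream model from the example above (the run ID is arbitrary):
python synthesize.py --type=inf --gpu=0 --runid="endless_fish" --dynamics_target=data/dynamic_textures/fish --dynamics_model=models/MSOEnet_ucf101train01_6e-4_allaug_exceptscale_randorder.tfmodel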
For incremental dynamic texture synthesis, run:
python synthesize.py --type=inc --gpu=<NUMBER> --runid=<NAME> --dynamics_target=data/dynamic_textures/<FOLDER> --dynamics_model=models/<TFMODEL> --appearance_target=data/textures/<IMAGE>
Store your chosen static texture in ./data/textures. The filepath to this texture is your --appearance_target path. This texture should be the last frame of a previously generated sequence.
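A hypothetical invocation that continues the "my_cool_fish" run from above; the --appearance_target filename is a placeholder for that run's last frame, copied into data/textures:
python synthesize.py --type=inc --gpu=0 --runid="my_cool_fish_continued" --dynamics_target=data/dynamic_textures/fish --dynamics_model=models/MSOEnet_ucf101train01_6e-4_allaug_exceptscale_randorder.tfmodel --appearance_target=data/textures/my_cool_fish_last_frame.png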
For static texture synthesis (Gatys et al.'s method of texture synthesis), run:
python synthesize.py --type=sta --gpu=<NUMBER> --runid=<NAME> --appearance_target=data/textures/<IMAGE>
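A hypothetical invocation, reusing the static texture from the dynamics style transfer example above:
python synthesize.py --type=sta --gpu=0 --runid="static_water" --appearance_target=data/textures/water_paint_cropped.jpeg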
The network's output is saved at data/out/<RUNID>.
Use ./useful_scripts/makegif.sh to create a GIF from a folder of images. For example,
./useful_scripts/makegif.sh "data/out/calm_water/iter_6000*" calm_water.gif
will create the GIF calm_water.gif from the images matching iter_6000* in the calm_water output folder.
Logs and snapshots are created and stored in ./logs/<RUNID> and ./snapshots/<RUNID>, respectively. You can view the loss progress for a particular run in TensorBoard.
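For example (assuming TensorBoard is installed alongside TensorFlow), point it at the logs directory and open the printed URL in a browser:
tensorboard --logdir=logs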
@inproceedings{tesfaldet2018,
author = {Matthew Tesfaldet and Marcus A. Brubaker and Konstantinos G. Derpanis},
title = {Two-Stream Convolutional Networks for Dynamic Texture Synthesis},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2018}
}
Two-Stream Convolutional Networks for Dynamic Texture Synthesis Copyright (C) 2018 Matthew Tesfaldet
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
For questions, please contact Matthew Tesfaldet (mtesfald@eecs.yorku.ca).