Learning a Generative Model for Multi-Step Human-Object Interactions from Videos

This paper received the Eurographics 2019 Best Paper Honorable Mention.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{wang2019learning,
  title={Learning a Generative Model for Multi-Step Human-Object Interactions from Videos},
  author={Wang, He and Pirk, S{\"o}ren and Yumer, Ersin and Kim, Vladimir G and Sener, Ozan and Sridhar, Srinath and Guibas, Leonidas J},
  booktitle={Computer Graphics Forum},
  volume={38},
  number={2},
  pages={367--378},
  year={2019},
  organization={Wiley Online Library}
}

Introduction

This is a TensorFlow implementation of Action Plot RNN, a recurrent model that generates action plots, i.e. sequences of multi-step human-object interactions.

The repository includes:

  • Source code of Action Plot RNN
  • Training code
  • Pre-trained weights
  • Sampling code for generating action plots

Requirements

  • Python 3.5
  • TensorFlow 1.3.0
  • tflearn
  • cPickle (on Python 3 this functionality is provided by the built-in pickle module, so no separate install is needed)
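
A minimal environment setup along these lines should work; this is only a sketch, assuming the pinned packages are installed from PyPI into a Python 3.5 environment:

# Install the pinned TensorFlow release and tflearn (sketch, not an official install script)
pip3 install tensorflow==1.3.0 tflearn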

Video Dataset

If you are interested in the interaction videos, you can download our dataset from https://drive.google.com/drive/folders/1vBazEJhfXeAZ06xR1T2QbmnxVbbRaE5S?usp=sharing.
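
If you prefer the command line, one possible route is the third-party gdown utility (not part of this repository; version 4 or later supports folder downloads):

# Optional: fetch the Google Drive dataset folder from the command line with gdown
pip3 install gdown
gdown --folder "https://drive.google.com/drive/folders/1vBazEJhfXeAZ06xR1T2QbmnxVbbRaE5S"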

Training

# Train a new Action Plot model from scratch
python3 train.py

Generation

# Sampling action plots using a checkpoint
python3 sample.py --save_dir=/ckpts/ckpts_dir --obj_list="book phone bowl bottle cup orange"
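
Here --save_dir should point at the directory containing the trained checkpoint, and --obj_list gives the space-separated set of objects that the sampled action plots may involve.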