This is the implementation of our paper "Video Question Answering via Gradually Refined Attention over Appearance and Motion".
For our experiments, we create two VideoQA datasets, MSVD-QA and MSRVTT-QA. Both are based on existing video description datasets: the QA pairs are generated from the descriptions using this tool, with additional processing steps, and the corresponding videos come from the base datasets, MSVD and MSR-VTT. Below are some examples from the datasets. For MSVD-QA, youtube_mapping.txt may be needed to build the mapping between video names.
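Since the mapping file's format is not documented here, the following is only a minimal parsing sketch, assuming each line pairs a YouTube-style clip name with its vidN index; verify against the real file before relying on it.

```python
# Minimal sketch: build the video-name mapping from youtube_mapping.txt.
# ASSUMPTION: each line holds a YouTube-style clip name followed by its
# vidN index (e.g. "-4wsuPCjDBc_5_15 vid1"); check the actual file format.
def load_mapping(path='youtube_mapping.txt'):
    mapping = {}
    with open(path) as f:
        for line in f:
            name, vid = line.split()
            mapping[vid] = name  # e.g. mapping['vid1'] == '-4wsuPCjDBc_5_15'
    return mapping
```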
We propose a model with gradually refined attention over appearance and motion in the video to tackle the VideoQA task. The architecture is presented below. We also compare the proposed model with three baseline models; details can be found in the paper.
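To give a rough feel for the idea, the sketch below shows one question-guided attention step over appearance features whose output then conditions the attention over motion features. It is a simplified illustration with assumed shapes, not the paper's exact equations or this repository's code.

```python
# A simplified, question-guided attention step -- an illustration with
# assumed shapes, NOT the paper's exact formulation.
import tensorflow as tf

def attend(features, query):
    """Soft temporal attention: pool T feature vectors under a query."""
    # features: [batch, T, D]; query: [batch, D]
    scores = tf.reduce_sum(features * tf.expand_dims(query, 1), axis=2)
    weights = tf.nn.softmax(scores)                        # [batch, T]
    return tf.reduce_sum(features * tf.expand_dims(weights, 2), axis=1)

# Assumed shapes: 20 sampled frames, 4096-d VGG16/C3D features,
# 300-d question encoding.
appearance = tf.placeholder(tf.float32, [None, 20, 4096])
motion = tf.placeholder(tf.float32, [None, 20, 4096])
question = tf.placeholder(tf.float32, [None, 300])

query = tf.layers.dense(question, 4096)  # project question into feature space
app_ctx = attend(appearance, query)
# Refinement: the attended appearance context conditions the motion attention.
mot_ctx = attend(motion, query + app_ctx)
```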
The code is written in pure Python, with TensorFlow as the deep learning library. It relies on two community implementations of the feature extraction networks, VGG16 and C3D.
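As a schematic of how inputs to the two extractors are usually prepared (single frames for VGG16, short fixed-length clips for C3D), consider the following; the frame counts and clip length are illustrative assumptions, not the repository's settings.

```python
# Schematic input sampling for the two feature extractors.
# ASSUMPTION: 20 uniformly spaced samples and 16-frame C3D clips are
# illustrative defaults, not necessarily this repository's settings.
import numpy as np

def sample_indices(num_frames, num_samples=20):
    """Uniformly spaced frame indices over a video."""
    return np.linspace(0, num_frames - 1, num_samples).astype(int)

frame_idx = sample_indices(480)         # frames fed to VGG16 (appearance)
clip_starts = sample_indices(480 - 16)  # start frames of 16-frame C3D clips (motion)
```

The code has been tested in the following environment: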
- Ubuntu 14.04
- Python 3.6.0
- TensorFlow 1.3.0
- Clone the repository to your local machine.
$ git clone https://github.com/xudejing/VideoQA.git
- Download the VGG16 checkpoint and C3D checkpoint provided in the corresponding repositories and put them in the directory `util`; download the word embeddings trained over 6B tokens (glove.6B.zip) from GloVe and put the 300d file in the directory `util` (a loading sketch follows this list).
- Install the Python dependency packages.
$ pip install -r requirements.txt
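As referenced above, here is a minimal sketch of loading the downloaded GloVe vectors, assuming the standard glove.6B.300d.txt text format (one word followed by 300 space-separated floats per line).

```python
# Minimal sketch: load GloVe vectors into a {word: vector} dict.
# ASSUMPTION: util/glove.6B.300d.txt in the standard GloVe text format.
import numpy as np

def load_glove(path='util/glove.6B.300d.txt'):
    embeddings = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings
```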
The directory `model` contains the definitions of the four models, and `config.py` is where the parameters of the models and the training process are defined.
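As a purely hypothetical illustration of the kinds of parameters such a file typically defines (the actual names and values live in `config.py`):

```python
# Hypothetical illustration only -- the real parameter names and values
# are defined in config.py, not here.
example_config = {
    'word_dim': 300,         # GloVe embedding size
    'frame_num': 20,         # sampled frames per video
    'appearance_dim': 4096,  # VGG16 feature size
    'motion_dim': 4096,      # C3D feature size
    'batch_size': 64,
    'learning_rate': 1e-3,
}
```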
- Preprocess the VideoQA datasets, for example:
$ python preprocess_msvdqa.py {dataset location}
- Train, validate and test the models, for example:
$ python run_gra.py --mode train --gpu 0 --log log/evqa --dataset msvd_qa --config 0
(Note: you can pass `-h` to get help.)
- Visualize the training process using TensorBoard, for example:
$ tensorboard --logdir log --port 8888
If you find this code useful, please cite the following paper:
@inproceedings{xu2017video,
title={Video Question Answering via Gradually Refined Attention over Appearance and Motion},
author={Xu, Dejing and Zhao, Zhou and Xiao, Jun and Wu, Fei and Zhang, Hanwang and He, Xiangnan and Zhuang, Yueting},
booktitle={ACM Multimedia},
year={2017}
}