Recurrent Attention Models for Depth-Based Person Identification [website] [arxiv] [pdf]
Albert Haque, Alexandre Alahi, Li Fei-Fei
CVPR 2016
- Clone the repo:
git clone https://github.com/ahaque/ram_person_id.git
- (Optional) If you are using GPU/CUDA:
luarocks install cutorch
luarocks install cunn
luarocks install cunnx
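If the CUDA rocks installed correctly, a quick sanity check (not part of the original steps) is to load them in torch and move a tensor to the GPU:
$ th
th> require 'cutorch'
th> require 'cunn'
th> print(cutorch.getDeviceCount())
th> x = torch.rand(3, 3):cuda()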
- Install HDF5:
sudo apt install libhdf5-serial-dev hdf5-tools
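Once the Torch hdf5 binding is available (assumed here to be pulled in by the Lua packages installed in the next step), a quick write/read round trip from torch confirms the HDF5 install; test.h5 is just a scratch file:
th> require 'hdf5'
th> f = hdf5.open('test.h5', 'w')
th> f:write('/data', torch.rand(4, 4))
th> f:close()
th> g = hdf5.open('test.h5', 'r')
th> print(g:read('/data'):all())
th> g:close()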
- Install more Lua packages (nn, dpnn, image, etc.):
./install_custom_rocks.sh
- Confirm that the custom rocks were installed correctly by running torch and checking that dp.Dpit exists. From the bash/command line:
$ th
th> require 'dp'
th> dp.Dpit
table: 0x41d27e80    [0.0001s]
If you see an error, you may need to refresh the Lua package cache from the torch command line, as shown below.
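Lua caches required modules in the package.loaded table, so a minimal way to force a reload is to clear the dp entry and require it again:
th> package.loaded['dp'] = nil
th> require 'dp'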
The datasets can be downloaded from the publishers' websites:
- DPI-T: Depth-Based Person Identification from Top View [website]
- BIWI: BIWI RGBD-ID Dataset [website]
- IAS: IAS-Lab RGBD-ID [website]
- IIT PAVIS: RGB-D person Re-Identification Dataset [website]
To automatically download the DPI-T dataset, run:
./download_datasets.sh
- Navigate to src/encoder.
- Train the encoder (use -gpuid 0 to run on the GPU):
th train.lua
th train.lua -gpuid 0
- The model will be saved to opt.dir every opt.save_interval epochs.
Note: You must have a saved encoder model before training the recurrent attention model.
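To confirm that an encoder checkpoint was written, it can be loaded back from the torch command line with torch.load; the file name below is only a placeholder, the actual name under opt.dir is set by the training options:
th> encoder = torch.load('encoder_epoch_10.t7')
th> print(encoder)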
- Navigate to the src folder. The file opts.lua contains the training options, model architecture definition, and optimization settings.
- Train the recurrent attention model (use -gpuid 0 to run on the GPU):
th train.lua
th train.lua -gpuid 0
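For reference, Torch training scripts typically declare options like these with torch.CmdLine; the sketch below is illustrative only, and every option name except -gpuid is a placeholder (see opts.lua for the real settings):
-- illustrative torch.CmdLine sketch, not the actual opts.lua
local cmd = torch.CmdLine()
cmd:option('-gpuid', -1, 'which GPU to use; -1 means CPU')
cmd:option('-dir', 'save/', 'placeholder: directory where checkpoints are written (opt.dir)')
cmd:option('-save_interval', 10, 'placeholder: save a checkpoint every N epochs (opt.save_interval)')
local opt = cmd:parse(arg)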
- Done.
Haque, A., Alahi, A., Fei-Fei, L.: Recurrent attention models for depth-based person identification. CVPR, 2016.
Bibtex:
@inproceedings{haque2016cvpr,
author = {Haque, Albert and Alahi, Alexandre and Fei-Fei, Li},
title = {Recurrent Attention Models for Depth-Based Person Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}