This repository hosts the code for our paper Keypoint Message Passing for Video-based Person Re-Identification.
Work in progress...
- Make sure conda is installed.
- Create the environment from file:

  ```shell
  conda env create -f environment.yml
  ```
- Install torchreid:

  ```shell
  git clone https://github.com/KaiyangZhou/deep-person-reid.git
  cd deep-person-reid/
  pip install -r requirements.txt
  python setup.py develop
  ```
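After `python setup.py develop`, you can sanity-check that the package is importable. The snippet below is a minimal check; the top-level package name `torchreid` is assumed from the repository:

```python
import importlib.util

def is_installed(package_name: str) -> bool:
    """Return True if the package can be found on the current Python path."""
    return importlib.util.find_spec(package_name) is not None

# Assumes the package installs under the top-level name "torchreid".
print("torchreid installed:", is_installed("torchreid"))
```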
- Download the MARS dataset and keypoints.
- Organize the file tree as below:
  ```
  KeypointMessagePassing
  └── data
      └── mars
          ├── info/
          ├── bbox_train/
          ├── bbox_test/
          ├── bbox_train_keypoints/
          └── bbox_test_keypoints/
  ```
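To catch path mistakes before a long training run, the layout above can be verified with a small script. This is a hypothetical helper written for this README, not part of the repository:

```python
from pathlib import Path

# Sub-directories expected under data/mars, per the tree above.
EXPECTED_SUBDIRS = [
    "info",
    "bbox_train",
    "bbox_test",
    "bbox_train_keypoints",
    "bbox_test_keypoints",
]

def missing_mars_dirs(root: str) -> list:
    """Return the expected sub-directories missing under <root>/data/mars."""
    mars = Path(root) / "data" / "mars"
    return [d for d in EXPECTED_SUBDIRS if not (mars / d).is_dir()]

if __name__ == "__main__":
    missing = missing_mars_dirs(".")
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("MARS layout looks complete.")
```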
```shell
# training
CUDA_VISIBLE_DEVICES=0 python scripts/trainval_vginteract.py --cfg_file <prefix>/cfg.yaml --data.save_dir logs/<version_number>/ --data.sources ['marspose'] --data.targets ['marspose'] --train.max_epoch <epoch_number>

# testing
CUDA_VISIBLE_DEVICES=0 python scripts/trainval_vginteract.py --cfg_file logs/<version_number>/<time_stamp_and_machine_name>/cfg.yaml --model.resume logs/<version_number>/<time_stamp_and_machine_name>/model/model.pth.tar-<epoch_number> --test.evaluate
```
Note: `cfg.yaml` contains the default hyper-parameters; the command-line flags shown above override these defaults.
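The override mechanism can be pictured as dotted flag names indexing into a nested config dictionary. The helper below is an illustrative sketch of that behaviour only, not the repository's actual argument parser:

```python
def apply_override(cfg: dict, dotted_key: str, value) -> None:
    """Set cfg["a"]["b"]... = value for a dotted key like "train.max_epoch"."""
    keys = dotted_key.split(".")
    node = cfg
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value

# Hypothetical defaults, standing in for the contents of cfg.yaml.
cfg = {"train": {"max_epoch": 60}, "data": {"sources": ["mars"]}}

# Mimic `--train.max_epoch 120 --data.sources ['marspose']`.
apply_override(cfg, "train.max_epoch", 120)
apply_override(cfg, "data.sources", ["marspose"])
print(cfg)
```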
If you find this work useful, please cite:

```bibtex
@inproceedings{chen2021keypoint,
  title={Keypoint Message Passing for Video-based Person Re-Identification},
  author={Chen, Di and Doering, Andreas and Zhang, Shanshan and Yang, Jian and Gall, Juergen and Schiele, Bernt},
  booktitle={AAAI},
  year={2022}
}
```