
DyGLIP: A dynamic graph model with link prediction for accurate multi-camera multiple object tracking

Authors: Kha Gia Quach, Pha Nguyen, Huu Le, Thanh-Dat Truong, Chi Nhan Duong, Minh-Triet Tran, Khoa Luu

Email: kquach@ieee.org, panguyen@uark.edu

Overview

We release the code for our CVPR 2021 paper. For more details, please refer to the paper.

Project Download

First, clone the repository:

git clone https://github.com/uark-cviu/DyGLIP

Prerequisites

The code requires the following environment, which can be created with conda:

cd DyGLIP
conda env create -f environment.yml 
conda activate dyglip

Data Preparation

Please place all datasets in /data/:

/data/
├── CAMPUS
│   ├── Auditorium
│   ├── Garden1
│   ├── Garden2
│   └── Parkinglot
├── EPFL
│   ├── Basketball
│   ├── Campus
│   ├── Laboratory
│   ├── Passageway
│   └── Terrace
├── PETS09
├── MCT
│   ├── Dataset1
│   ├── Dataset2
│   ├── Dataset3
│   └── Dataset4
└── aic
    ├── S02
    └── S05
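
As a quick sanity check (a minimal sketch, not part of the repository), the expected layout can be verified before running the pipeline; the folder names below are taken from the tree above, and /data is assumed to be the dataset root.

import os

# Expected dataset layout, taken from the tree above.
EXPECTED = {
    "CAMPUS": ["Auditorium", "Garden1", "Garden2", "Parkinglot"],
    "EPFL": ["Basketball", "Campus", "Laboratory", "Passageway", "Terrace"],
    "PETS09": [],
    "MCT": ["Dataset1", "Dataset2", "Dataset3", "Dataset4"],
    "aic": ["S02", "S05"],
}

DATA_ROOT = "/data"

for dataset, scenes in EXPECTED.items():
    for sub in [dataset] + [os.path.join(dataset, s) for s in scenes]:
        path = os.path.join(DATA_ROOT, sub)
        status = "ok" if os.path.isdir(path) else "MISSING"
        print("%-40s %s" % (path, status))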

Step 1 - Detection

Please follow the detection guidance to obtain bounding-box predictions.

Extract Mask R-CNN features by running Step1_Detection/identifier/preprocess/extract_img_and_feat.py; the output should be two files, bboxes.pkl and maskrcnn_feats.pkl.

Extract pre-computed re-identification (ReID) features by running Step1_Detection/identifier/preprocess/extract_img_and_reid_feat.py; the output should be the reid_feats.pkl file.
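
To confirm the extraction step produced usable outputs, a small sketch like the one below (not a repository script) can load and inspect the pickle files; since their internal structure is not documented here, it only reports top-level types and sizes.

import pickle

# Output files produced by the two extraction scripts above
# (assumed to be in the current working directory).
for name in ["bboxes.pkl", "maskrcnn_feats.pkl", "reid_feats.pkl"]:
    with open(name, "rb") as f:
        obj = pickle.load(f)
    size = len(obj) if hasattr(obj, "__len__") else "n/a"
    print("%s: type=%s, len=%s" % (name, type(obj).__name__, size))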

Step 2 - Graph-based Feature Extraction

This step requires a separate environment with Python 2.7 and TensorFlow 1.11.

Prepare the graphs by running Step2_GraphFeature/prepare_graphs.py.

Please follow the GraphFeature guidance to train the model.
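
Because this step depends on an older environment, the following sketch (an assumption about how one might check it, not part of the repository) confirms the interpreter and TensorFlow versions noted above before preparing graphs or training.

import sys
import tensorflow as tf

# Step 2 expects Python 2.7 and TensorFlow 1.11 (see the note above).
assert sys.version_info[:2] == (2, 7), "Expected Python 2.7, got %s" % sys.version.split()[0]
assert tf.__version__.startswith("1.11"), "Expected TensorFlow 1.11, got %s" % tf.__version__
print("Environment looks compatible with Step 2.")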

Step 3 - Matching

Obtain the final output from the matching baselines: graph-based matching and non-negative matrix factorization (NMF).
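
For intuition only, the sketch below illustrates how NMF can group tracks across cameras from a non-negative affinity matrix; it uses scikit-learn and random placeholder features, and is not the repository's matching code (the feature matrix and number of identities are assumptions).

import numpy as np
from sklearn.decomposition import NMF

# Placeholder: rows are per-camera track features (e.g. averaged ReID features).
track_features = np.random.rand(12, 128)   # 12 tracks, 128-d features (assumed)
num_identities = 4                         # assumed number of global identities

# Factorize the non-negative track-to-track affinity; each track is assigned
# to the identity (component) with the largest weight.
affinity = track_features.dot(track_features.T)
W = NMF(n_components=num_identities, init="nndsvd", random_state=0).fit_transform(affinity)
global_ids = W.argmax(axis=1)
print(global_ids)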

Acknowledgements

  • Thanks to DySAT for providing a strong graph attention network baseline.
  • Thanks to ELECTRICITY-MTMC for providing a useful detection inference pipeline for MC-MOT.

Citation

If you find this code useful for your research, please consider citing:

@InProceedings{Quach_2021_CVPR,
    author    = {Quach, Kha Gia and Nguyen, Pha and Le, Huu and Truong, Thanh-Dat and Duong, Chi Nhan and Tran, Minh-Triet and Luu, Khoa},
    title     = {DyGLIP: A Dynamic Graph Model With Link Prediction for Accurate Multi-Camera Multiple Object Tracking},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {13784-13793}
}

Paper Linkage
