This is the official code for the paper "Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports" (WACV 2024 RWS Workshop).
- Clone this repo; we'll call the cloned directory {Deep-EIoU Root}.
- Install dependencies:

```shell
conda create -n DeepEIoU python=3.7
conda activate DeepEIoU

# Install PyTorch with the CUDA version that suits your machine.
# We use torch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 with cuda==11.6.

cd Deep-EIoU/reid
pip install -r requirements.txt
pip install cython_bbox
python setup.py develop
cd ..
```
To reproduce the results on the SportsMOT dataset, you need to download the detection and embedding files from drive and put them in the corresponding folders:
```
{Deep-EIoU Root}
└——————Deep-EIoU
       ├——————detection
       |      ├——————v_-9kabh1K8UA_c008.npy
       |      ├——————v_-9kabh1K8UA_c009.npy
       |      └——————...
       └——————embedding
              ├——————v_-9kabh1K8UA_c008.npy
              ├——————v_-9kabh1K8UA_c009.npy
              └——————...
```
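The per-sequence `.npy` files can be saved and loaded with NumPy. Below is a minimal, hypothetical sketch: the file name and the `[x1, y1, x2, y2, score]` row layout are illustrative assumptions, not the confirmed format of the files on the drive link.

```python
import numpy as np

# Hypothetical example: a dummy detection array shaped like a typical
# per-sequence file, with one row per detection [x1, y1, x2, y2, score].
# The real files from the drive link may use a different layout.
dets = np.array([
    [100.0, 150.0, 180.0, 320.0, 0.92],
    [400.0, 120.0, 470.0, 300.0, 0.87],
])
np.save("v_example_c001.npy", dets)

# Loading works the same way for the provided detection/embedding files.
loaded = np.load("v_example_c001.npy")
print(loaded.shape)  # (2, 5)
```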
Run the following commands; the tracking result for each sequence will appear in the interpolation folder. Zip the tracking results directly and submit the archive to the SportsMOT evaluation server.
```shell
python tools/sport_track.py --root_path <Deep-EIoU Root>
python tools/sport_interpolation.py --root_path <Deep-EIoU Root>
```
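The zipping step for submission can also be done programmatically. A minimal sketch in Python: the folder name `interpolation` follows the text above, while the result file name and its contents are stand-ins for real tracker output.

```python
import os
import zipfile

# Hypothetical example: zip all per-sequence result files in the
# interpolation folder for submission to the evaluation server.
result_dir = "interpolation"
os.makedirs(result_dir, exist_ok=True)

# Stand-in for real tracker output (one txt file per sequence,
# illustrative MOT-style line).
with open(os.path.join(result_dir, "v_example_c001.txt"), "w") as f:
    f.write("1,1,100,150,80,170,0.92,-1,-1,-1\n")

with zipfile.ZipFile("submission.zip", "w") as zf:
    for name in sorted(os.listdir(result_dir)):
        # Store files at the archive root (flat layout).
        zf.write(os.path.join(result_dir, name), arcname=name)

print(zipfile.ZipFile("submission.zip").namelist())
```

Check the evaluation server's expected archive layout before submitting; flat vs. nested zips differ between benchmarks.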
To run the demo on your custom dataset, download the detector and ReID model from drive and put them in the corresponding folder:
```
{Deep-EIoU Root}
└——————Deep-EIoU
       └——————checkpoints
              ├——————best_ckpt.pth.tar (YOLOX Detector)
              └——————sports_model.pth.tar-60 (OSNet ReID Model)
```
Demo on our provided video:

```shell
python tools/demo.py
```
Demo on your custom video:

```shell
python tools/demo.py --path <your video path>
```
If you find our work useful, please cite our paper:
```
@inproceedings{huang2024iterative,
  title={Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports},
  author={Huang, Hsiang-Wei and Yang, Cheng-Yen and Sun, Jiacheng and Kim, Pyong-Kun and Kim, Kwang-Ju and Lee, Kyoungoh and Huang, Chung-I and Hwang, Jenq-Neng},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={163--172},
  year={2024}
}
```
The code is based on ByteTrack, Torchreid, and BoT-SORT; thanks for their wonderful work!
Hsiang-Wei Huang (hwhuang@uw.edu)