
SoccerNet TrackEval

Fork of the TrackEval library to support specific SoccerNet tasks and evaluation metrics, such as SoccerNet MOT and SoccerNet Game State Reconstruction.

This codebase provides code for a number of different tracking evaluation metrics (including the HOTA metrics), supports running all of these metrics on a number of different tracking benchmarks, and also covers plotting of results and other things one may want to do for tracking evaluation.

To perform evaluation for SoccerNet Game State Reconstruction, please run the following command:

python3 ./scripts/run_soccernet_gs.py --GT_FOLDER path/to/dataset/SoccerNetGS --TRACKERS_FOLDER path/to/predictions_folder --TRACKER_SUB_FOLDER "" --SPLIT_TO_EVAL "test"

Where path/to/dataset/SoccerNetGS points to the dataset folder, which can be downloaded from the sn-gamestate repository, and path/to/predictions_folder points to a folder containing another folder with your prediction JSON files. This predictions folder should be named SoccerNetGS-test or SoccerNetGS-valid, where the suffix matches the SPLIT_TO_EVAL config. The subfolder is the name of your tracker and can be anything. Here is what the predictions folder should look like:

SoccerNetGS-test
└── tracklab  # use any name
    ├── SNGS-116.json
    ├── SNGS-117.json
    ├── SNGS-118.json
    ...
    ├── SNGS-199.json
    └── SNGS-200.json

Official Evaluation Code

The following benchmarks use TrackEval as their official evaluation code; check out the links to see TrackEval in action.

If you run a tracking benchmark and want to use TrackEval as your official evaluation code, please contact Jonathon (contact details below).

Currently implemented metrics

The following metrics are currently implemented:

| Metric Family | Sub metrics | Paper | Code | Notes |
|---|---|---|---|---|
| HOTA metrics | HOTA, DetA, AssA, LocA, DetPr, DetRe, AssPr, AssRe | paper | code | Recommended tracking metric |
| CLEARMOT metrics | MOTA, MOTP, MT, ML, Frag, etc. | paper | code | |
| Identity metrics | IDF1, IDP, IDR | paper | code | |
| VACE metrics | ATA, SFDA | paper | code | |
| Track mAP metrics | Track mAP | paper | code | Requires confidence scores |
| J & F metrics | J&F, J, F | paper | code | Only for Seg Masks |
| ID Euclidean | ID Euclidean | paper | code | |

Currently implemented benchmarks

The following benchmarks are currently implemented:

| Benchmark | Sub-benchmarks | Type | Website | Code | Data Format |
|---|---|---|---|---|---|
| RobMOTS | Combination of 8 benchmarks | Seg Masks | website | code | format |
| Open World Tracking | TAO-OW | OpenWorld / Seg Masks | website | code | format |
| MOTChallenge | MOT15/16/17/20 | 2D BBox | website | code | format |
| KITTI Tracking | | 2D BBox | website | code | format |
| BDD-100k | | 2D BBox | website | code | format |
| TAO | | 2D BBox | website | code | format |
| MOTS | KITTI-MOTS, MOTS-Challenge | Seg Mask | website | code and code | format |
| DAVIS Unsupervised | | Seg Mask | website | code | format |
| YouTube-VIS | | Seg Mask | website | code | format |
| Head Tracking Challenge | | 2D BBox | website | code | format |
| PersonPath22 | | 2D BBox | website | code | format |
| BURST | {Common, Long-tail, Open-world} Class-guided, {Point, Box, Mask} Exemplar-guided | Seg Mask | website | | format |

HOTA metrics

This code is also the official reference implementation for the HOTA metrics:

HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. IJCV 2020. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixe and Bastian Leibe.

HOTA is a novel set of MOT evaluation metrics which enable better understanding of tracking behavior than previous metrics.

For more information, check out the following links:

Properties of this codebase

The code is written 100% in Python, with only numpy and scipy as minimum requirements.

The code is designed to be easily understandable and easily extendable.

The code is also extremely fast, running at more than 10x the speed of both the MOTChallengeEvalKit and py-motmetrics (see the detailed speed comparison below).

The implementation of CLEARMOT and ID metrics aligns perfectly with the MOTChallengeEvalKit.

By default the code prints results to the screen, saves results out as both a summary txt file and a detailed results csv file, and outputs plots of the results. All outputs are by default saved to the 'tracker' folder for each tracker.
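As a rough example, after evaluating one of the MOTChallenge example trackers the per-class outputs typically land next to that tracker's results, along the lines of the sketch below. The exact paths and file names depend on the benchmark, class, and config, so treat these as illustrative rather than guaranteed:

data/trackers/mot_challenge/MOT17-train/MPNTrack/
├── pedestrian_summary.txt    # one-row summary of all computed metrics
├── pedestrian_detailed.csv   # per-sequence and combined results
└── ...                       # plots of the results, if plotting is enabled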

Running the code

The code can be run in one of two ways:

  • From the terminal via one of the scripts here. See each script for instructions and arguments; hopefully these are self-explanatory.
  • Directly by importing this package into your code. See the same scripts above for how to do this, and the sketch after this list.
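As a rough illustration of the second option, here is a minimal sketch of driving the evaluation from Python, modelled on scripts/run_mot_challenge.py. The specific config keys, benchmark, and tracker name used here (MOT17, MPNTrack) are assumptions based on the example data described below; check the scripts for the exact options supported by your installed version.

```python
import trackeval  # this package

# Start from the default configs and override only what we need.
# These defaults assume the data.zip layout described below
# (TrackEval/data/gt/..., TrackEval/data/trackers/...).
eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
dataset_config['BENCHMARK'] = 'MOT17'              # which MOTChallenge benchmark to evaluate
dataset_config['TRACKERS_TO_EVAL'] = ['MPNTrack']  # one of the example trackers from data.zip

# Build the evaluator, dataset(s) and metric(s), then run the evaluation.
evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]
evaluator.evaluate(dataset_list, metrics_list)
```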

Quickly evaluate on supported benchmarks

To enable you to use TrackEval for evaluation as quickly and easily as possible, we provide ground-truth data, meta-data and example trackers for all currently supported benchmarks. You can download this here: data.zip (~150mb).

The data for RobMOTS is separate and can be found here: rob_mots_train_data.zip (~750mb).

The data for PersonPath22 is separate and can be found here: person_path_22_data.zip (~3mb).

The easiest way to begin is to extract this zip into the repository root folder such that the file paths look like: TrackEval/data/gt/...

This then corresponds to the default paths in the code. You can now run each of the scripts here without providing any arguments and they will by default evaluate all trackers present in the supplied file structure. To evaluate your own tracking results, simply copy your files as a new tracker folder into the file structure at the same level as the example trackers (MPNTrack, CIWT, track_rcnn, qdtrack, ags, Tracktor++, STEm_Seg), ensuring the same file structure for your trackers as in the example.
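For example, for the MOTChallenge benchmark the supplied structure looks roughly like the sketch below; your_tracker is a placeholder for whatever you name your results folder, and the exact layout for each benchmark is described in the format guides mentioned further down.

data
├── gt
│   └── mot_challenge
│       └── ...
└── trackers
    └── mot_challenge
        └── MOT17-train
            ├── MPNTrack
            │   └── data
            │       ├── MOT17-02-DPM.txt
            │       └── ...
            └── your_tracker
                └── data
                    ├── MOT17-02-DPM.txt
                    └── ...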

Of course, if your ground-truth and tracker files are located somewhere else you can simply use the script arguments to point the code toward your data.

To ensure your tracker outputs data in the correct format, check out our format guides for each of the supported benchmarks here, or check out the example trackers provided.

Evaluate on your own custom benchmark

To evaluate on your own data, you have two options:

  • Write custom dataset code (more effort, rarely worth it).
  • Convert your current dataset and trackers to the same format as an already implemented benchmark.

To convert formats, check out the format specifications defined here.

By default, we would recommend the MOTChallenge format, although any implemented format should work. Note that in many cases you will want to pass --DO_PREPROC False, unless your ground truth contains the MOTChallenge-style distractor-class annotations that the preprocessing step uses to remove distractor objects.
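For reference, a MOTChallenge-style tracker result file is a plain CSV text file with one line per box: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z. Below is a hypothetical converter sketch for writing such a file from your own tracking output; write_mot_challenge_file and the detections tuple layout are illustrative names, not part of this codebase.

```python
# Hypothetical helper (not part of TrackEval): write tracker output in the MOTChallenge text format.
# "detections" is assumed to be an iterable of (frame, track_id, x, y, w, h, score) tuples,
# with frame numbers and track ids starting at 1.
def write_mot_challenge_file(detections, out_path):
    with open(out_path, "w") as f:
        for frame, track_id, x, y, w, h, score in detections:
            # The last three columns (x, y, z world coordinates) are unused for 2D box tracking and set to -1.
            f.write(f"{frame},{track_id},{x:.2f},{y:.2f},{w:.2f},{h:.2f},{score:.4f},-1,-1,-1\n")

# Example: a single box in frame 1 for track 1.
write_mot_challenge_file([(1, 1, 100.0, 200.0, 50.0, 120.0, 0.98)], "MOT17-02-DPM.txt")
```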

Requirements

Code tested on Python 3.7.

  • Minimum requirements: numpy, scipy
  • For plotting: matplotlib
  • For segmentation datasets (KITTI MOTS, MOTS-Challenge, DAVIS, YouTube-VIS): pycocotools
  • For DAVIS dataset: Pillow
  • For J & F metric: opencv_python, scikit_image
  • For simple test cases for metrics: pytest

Use pip3 install -r requirements.txt to install all possible requirements.

Use pip3 install -r minimum_requirements.txt to install only the minimum requirements if you don't need the extra functionality listed above.

Timing analysis

Evaluating CLEAR + ID metrics on the Lif_T tracker on MOT17-train (seconds), on an i7-9700K CPU with 8 physical cores (median of 3 runs):

| Num Cores | TrackEval | MOTChallenge | Speedup vs MOTChallenge | py-motmetrics | Speedup vs py-motmetrics |
|---|---|---|---|---|---|
| 1 | 9.64 | 66.23 | 6.87x | 99.65 | 10.34x |
| 4 | 3.01 | 29.42 | 9.77x | | 33.11x* |
| 8 | 1.62 | 29.51 | 18.22x | | 61.51x* |

*using a different number of cores as py-motmetrics doesn't allow multiprocessing.

python scripts/run_mot_challenge.py --BENCHMARK MOT17 --TRACKERS_TO_EVAL Lif_T --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1  

Evaluating CLEAR + ID metrics on the LPC_MOT tracker on MOT20-train (seconds), on an i7-9700K CPU with 8 physical cores (median of 3 runs):

| Num Cores | TrackEval | MOTChallenge | Speedup vs MOTChallenge | py-motmetrics | Speedup vs py-motmetrics |
|---|---|---|---|---|---|
| 1 | 18.63 | 105.3 | 5.65x | 175.17 | 9.40x |

python scripts/run_mot_challenge.py --BENCHMARK MOT20 --TRACKERS_TO_EVAL LPC_MOT --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1

License

TrackEval is released under the MIT License.

Contact

If you encounter any problems with the code, please contact Jonathon Luiten (luiten@vision.rwth-aachen.de). If anything is unclear, or hard to use, please leave a comment either via email or as an issue and I would love to help.

Dedication

This codebase was built for you, in order to make your life easier! For anyone doing research on tracking or using trackers, please don't hesitate to reach out with any comments or suggestions on how things could be improved.

Contributing

We welcome contributions of new metrics, new supported benchmarks, and any other new features or code improvements. Send a PR, an email, or open an issue detailing what you'd like to add or change to begin a conversation.

Citing TrackEval

If you use this code in your research, please use the following BibTeX entry:

@misc{luiten2020trackeval,
  author =       {Jonathon Luiten and Arne Hoffhues},
  title =        {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year =         {2020}
}

Furthermore, if you use the HOTA metrics, please cite the following paper:

@article{luiten2020IJCV,
  title={HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  author={Luiten, Jonathon and Osep, Aljosa and Dendorfer, Patrick and Torr, Philip and Geiger, Andreas and Leal-Taix{\'e}, Laura and Leibe, Bastian},
  journal={International Journal of Computer Vision},
  pages={1--31},
  year={2020},
  publisher={Springer}
}

If you use any other metrics please also cite the relevant papers, and don't forget to cite each of the benchmarks you evaluate on.