OpenTAD is an open-source temporal action detection (TAD) toolbox based on PyTorch.
- [2024/07/25] 🔥 We rank 1st in the Action Recognition, Action Detection, and Audio-Based Interaction Detection tasks of the EPIC-KITCHENS-100 2024 Challenge, and 1st in the Moment Queries task of the Ego4D 2024 Challenge! Code is released at CausalTAD (arXiv'24).
- [2024/07/07] 🔥 We support DyFADet (ECCV'24). Thanks to the authors' effort!
- [2024/06/14] We release version v0.3, which brings many new features and improvements.
- [2024/04/17] We release AdaTAD (CVPR'24), which achieves an average mAP of 42.90% on ActivityNet and 77.07% on THUMOS14.
- Support SoTA TAD methods with modular design. We decompose the TAD pipeline into different components, and implement them in a modular way. This design makes it easy to implement new methods and reproduce existing methods.
- Support multiple TAD datasets. We support 9 TAD datasets, including ActivityNet-1.3, THUMOS-14, HACS, Ego4D-MQ, EPIC-Kitchens-100, FineAction, Multi-THUMOS, Charades, and EPIC-Sounds Detection datasets.
- Support feature-based training and end-to-end training. Feature-based training can easily be extended to end-to-end training with raw video input, and the video backbone can easily be swapped out.
- Release various pre-extracted features. We release the feature extraction code, as well as many pre-extracted features on each dataset.
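To illustrate the modular design described above, the sketch below assembles a detection pipeline from independent, swappable components. Note this is a toy illustration: the component names (`backbone`, `neck`, `head`) and fields are hypothetical placeholders, not OpenTAD's actual config schema.

```python
# Toy sketch of a modular TAD pipeline config.
# All type names and fields here are illustrative, NOT OpenTAD's real schema.
config = {
    "dataset": {"type": "THUMOS14", "input": "features"},   # or raw video for end-to-end
    "backbone": {"type": "VideoSwin", "frozen": True},      # swappable video backbone
    "neck": {"type": "TemporalFPN", "num_levels": 5},       # temporal feature pyramid
    "head": {"type": "AnchorFreeHead", "num_classes": 20},  # detection head
}

def build(component_cfg):
    """Toy registry lookup: each component is built independently from its
    own config dict, so replacing one (e.g. the backbone) leaves the rest
    of the pipeline untouched."""
    return component_cfg["type"]

pipeline = [build(config[k]) for k in ("backbone", "neck", "head")]
```

Because each component is configured and built in isolation, reproducing an existing method or prototyping a new one reduces to editing the relevant config entry.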
| One Stage | Two Stage | DETR | End-to-End Training |
| --- | --- | --- | --- |
The detailed configs, results, and pretrained models of each method can be found in the folders above.
Please refer to install.md for installation.
Please refer to data.md for data preparation.
Please refer to usage.md for details of training and evaluation scripts.
Please refer to changelog.md for update details.
Planned features and future work are tracked in roadmap.md.
[Acknowledgement] This repo is inspired by the OpenMMLab project, and we thank their contributors.
If you find this repo helpful, please cite us:
@article{liu2025opentad,
title={OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection},
author={Liu, Shuming and Zhao, Chen and Zohra, Fatimah and Soldan, Mattia and Pardo, Alejandro and Xu, Mengmeng and Alssum, Lama and Ramazanova, Merey and Alcázar, Juan León and Cioppa, Anthony and Giancola, Silvio and Hinojosa, Carlos and Ghanem, Bernard},
journal={arXiv preprint arXiv:2502.20361},
year={2025}
}
If you have any questions, please contact: shuming.liu@kaust.edu.sa