
OpenTAD: An Open-Source Temporal Action Detection Toolbox.

OpenTAD is an open-source temporal action detection (TAD) toolbox based on PyTorch.

🥳 What's New

📖 Major Features

  • Support SoTA TAD methods with modular design. We decompose the TAD pipeline into different components and implement them in a modular way. This design makes it easy to implement new methods and reproduce existing ones.
  • Support multiple TAD datasets. We support 9 TAD datasets: ActivityNet-1.3, THUMOS-14, HACS, Ego4D-MQ, EPIC-Kitchens-100, FineAction, Multi-THUMOS, Charades, and EPIC-Sounds Detection.
  • Support feature-based training and end-to-end training. Feature-based training can easily be extended to end-to-end training with raw video input, and the video backbone can easily be replaced.
  • Release various pre-extracted features. We release the feature extraction code, as well as many pre-extracted features for each dataset.
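The modular decomposition described above can be illustrated with a small PyTorch sketch. Note that the class names (`Backbone`, `Head`, `Detector`) and shapes below are hypothetical, chosen only to show the idea of assembling a detector from interchangeable components; they are not OpenTAD's actual API.

```python
# Conceptual sketch of a modular TAD pipeline (hypothetical names,
# not OpenTAD's actual API): the detector is assembled from
# interchangeable components, so swapping a backbone or head
# does not require touching the rest of the pipeline.
import torch
import torch.nn as nn


class Backbone(nn.Module):
    """Encodes pre-extracted snippet features into a temporal feature sequence."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, hidden_dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        return torch.relu(self.proj(x))


class Head(nn.Module):
    """Predicts per-snippet class scores and start/end boundary offsets."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.cls = nn.Conv1d(hidden_dim, num_classes, kernel_size=1)
        self.reg = nn.Conv1d(hidden_dim, 2, kernel_size=1)  # start/end offsets

    def forward(self, feats: torch.Tensor):
        return self.cls(feats), self.reg(feats)


class Detector(nn.Module):
    """Composes the interchangeable components into one TAD model."""

    def __init__(self, backbone: nn.Module, head: nn.Module):
        super().__init__()
        self.backbone, self.head = backbone, head

    def forward(self, x: torch.Tensor):
        return self.head(self.backbone(x))


# Assemble a detector from components; any part can be swapped independently.
model = Detector(Backbone(in_dim=2048, hidden_dim=256),
                 Head(hidden_dim=256, num_classes=20))
clip = torch.randn(2, 2048, 128)  # batch of 2 videos, 128 snippets each
cls_scores, offsets = model(clip)  # (2, 20, 128) and (2, 2, 128)
```

Replacing the feature-based `Backbone` with a raw-video encoder, as in end-to-end training, would leave the `Head` and `Detector` untouched, which is the point of the modular design.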

🌟 Model Zoo

One-Stage · Two-Stage · DETR · End-to-End Training

The detailed configs, results, and pretrained models for each method can be found in the folders above.

🛠️ Installation

Please refer to install.md for installation.

📝 Data Preparation

Please refer to data.md for data preparation.

🚀 Usage

Please refer to usage.md for details of training and evaluation scripts.

📄 Updates

Please refer to changelog.md for update details.

🤝 Roadmap

Planned features and future work are listed in roadmap.md.

🖊️ Citation

[Acknowledgement] This repo is inspired by the OpenMMLab project, and we thank their contributors.

If you find this repo helpful, please cite us:

@article{liu2025opentad,
  title={OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection},
  author={Liu, Shuming and Zhao, Chen and Zohra, Fatimah and Soldan, Mattia and Pardo, Alejandro and Xu, Mengmeng and Alssum, Lama and Ramazanova, Merey and Alcázar, Juan León and Cioppa, Anthony and Giancola, Silvio and Hinojosa, Carlos and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2502.20361},
  year={2025}
}

If you have any questions, please contact: shuming.liu@kaust.edu.sa.