InternVideo2 [Paper]

This repo provides the code and models for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'. The training and testing code for Stages 1 and 2 of InternVideo2 is given here, while Stage 3 can be found in the VideoChat2 repo.

  • Achieved 92.1% Top-1 accuracy on Kinetics-400.
  • Achieved state-of-the-art performance on over 60 video- and audio-related tasks (including action recognition, temporal localization, and retrieval) at release.
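
To give a feel for how the released checkpoints could be used, here is a minimal loading sketch. It assumes the Stage 2 checkpoint is published on Hugging Face under a name like `OpenGVLab/InternVideo2-Stage2_1B-224p-f4` and ships code loadable through `transformers` remote-code support; neither detail is stated in this README, so treat the scripts in this repo as the canonical entry point.

```python
# Hypothetical loading sketch: the model ID and remote-code support below are
# assumptions, not confirmed by this README.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVideo2-Stage2_1B-224p-f4",  # assumed Hugging Face model ID
    trust_remote_code=True,  # load the model class bundled with the checkpoint
)
model.eval()

# A video clip is a short stack of RGB frames, here shaped
# (batch, frames, channels, height, width) = (1, 4, 3, 224, 224).
frames = torch.randn(1, 4, 3, 224, 224)

# The exact forward signature depends on the code shipped with the checkpoint;
# consult the repo's demo/testing scripts before running inference on `frames`.
```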

Updates

  • 2025/02/25: InternVideo2-Stage2-6B is released; try it!
  • 2024/08/21: InternVideo2-Stage3-InternLM is released, featuring a longer context window.
  • 2024/08/12: We provide smaller models, InternVideo2-S/B/L, distilled from InternVideo2-1B. We also build a smaller VideoCLIP with MobileCLIP. The training code is here.
  • 2024/08/05: InternVideo2-Stage3-8B and InternVideo2-Stage3-8B-HD are released. 8B denotes the combination of InternVideo2-1B and a 7B LLM.
  • 2024/07/10: The self-annotated audio-visual-speech video-text data and audio-visual video-text data from Stage 2 are now available here. The training videos from Stage 1 can be accessed here.
  • 2024/04/15: Update the code and scripts for InternVideo2 CLIP.
  • 2024/04/13: Update the code and scripts for InternVideo2 Stages 1 & 2.
  • 2024/03/22: The technical report of InternVideo2 is released.

Citation

If this work is helpful for your research, please consider citing InternVideo2.

```bibtex
@article{wang2024internvideo2,
  title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
  author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Wang, Chenting and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
  journal={arXiv preprint arXiv:2403.15377},
  year={2024}
}
```