InternVideo2 [Paper]


This repo provides the code and models for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.

  • Achieved 92.1% Top-1 accuracy on Kinetics-400.
  • Achieved SOTA performance on over 60 video/audio-related tasks (including action recognition, temporal localization, retrieval, etc.) when released; a CLIP-style retrieval-scoring sketch follows below.
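The retrieval results above come from CLIP-style video-text matching. For readers unfamiliar with that setup, here is a minimal, self-contained sketch of the scoring step in PyTorch. The encoder stubs, embedding dimension, and input shapes below are placeholders for illustration, not this repo's API; the actual model loading and inference code lives in the stage scripts of this repo.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the repo's video and text encoders; the real
# models are loaded via the scripts in this repo, not via these stubs.
def encode_video(frames: torch.Tensor) -> torch.Tensor:
    # frames: (batch, num_frames, 3, H, W) -> (batch, embed_dim)
    return torch.randn(frames.shape[0], 512)

def encode_text(token_ids: torch.Tensor) -> torch.Tensor:
    # token_ids: (batch, seq_len) -> (batch, embed_dim)
    return torch.randn(token_ids.shape[0], 512)

# CLIP-style retrieval: L2-normalize both embeddings, then rank
# candidates by cosine similarity (a dot product of unit vectors).
videos = torch.randn(4, 8, 3, 224, 224)   # 4 clips, 8 frames each
texts = torch.randint(0, 30000, (4, 32))  # 4 tokenized captions

v = F.normalize(encode_video(videos), dim=-1)
t = F.normalize(encode_text(texts), dim=-1)
similarity = v @ t.T                       # (4 videos, 4 texts)
best_text_per_video = similarity.argmax(dim=-1)
print(best_text_per_video)
```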

Updates

  • 2024/08/21: InternVideo2-Stage3-InternLM is released. It has a longer context window.
  • 2024/08/12: We provide smaller models, InternVideo2-S/B/L, which are distilled from InternVideo2-1B. We also build a smaller VideoCLIP with MobileCLIP.
  • 2024/08/05: InternVideo2-Stage3-8B and InternVideo2-Stage3-8B-HD are released. 8B indicates the combined use of InternVideo2-1B and a 7B LLM.
  • 2024/07/10: The self-annotated audio-visual-speech video-text data and audio-visual video-text data from Stage 2 are now available here. The training videos from Stage 1 can be accessed here.
  • 2024/04/15: Update the code and scripts for InternVideo2 CLIP.
  • 2024/04/13: Update the code and scripts for InternVideo2 Stage1 & 2.
  • 2024/03/22: The technical report of InternVideo2 is released.

Citation

If this work is helpful for your research, please consider citing InternVideo2.

```bibtex
@article{wang2024internvideo2,
  title={Internvideo2: Scaling video foundation models for multimodal video understanding},
  author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Wang, Chenting and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
  journal={arXiv preprint arXiv:2403.15377},
  year={2024}
}
```