How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs
Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan
Mohamed bin Zayed University of AI, ETH Zurich, Australian National University, Technical University of Munich, Linköping University, Google
Official GitHub repository for the CVRR-Evaluation Suite
- (June, 2024)
- Added CVRR-ES evaluation results for Gemini Flash and GPT-4o models!
- (May 07, 2024)
- CVRR-ES evaluation dataset is released.
- Technical report for CVRR-ES benchmark is released.
- This repository provides inference code for Video-LLaVA, TimeChat, Gemini-Pro-Vision (API) and GPT4-Vision (API) models.
Motivated by the rapidly expanding real-world applications of Video Large Multi-modal Models (Video-LMMs) and the lack of benchmarks for complex video understanding, we present a new evaluation benchmark, the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), for Video-LMMs. CVRR-ES comprehensively evaluates recent Video-LMMs on their reasoning capabilities over complex, real-world videos and on their robustness to user prompts posed as text queries.
Left: Our Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES) comprises 11 diverse video evaluation dimensions encompassing a variety of complex and real-world contexts for evaluating Video Large Multi-modal Models (Video-LMMs). Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions shown on the left.
Abstract: Recent advancements in Large Language Models (LLMs) have led to the development of Video Large Multi-modal Models (Video-LMMs) that can handle a wide range of video understanding tasks. These models have the potential to be deployed in real-world applications such as robotics, AI assistants, medical surgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our daily lives underscores the importance of ensuring and evaluating their robust performance in mirroring human-like reasoning and interaction capabilities in complex, real-world contexts. However, existing benchmarks for Video-LMMs primarily focus on general video comprehension abilities and neglect assessing their reasoning capabilities over complex videos in the real-world context, and robustness of these models through the lens of user prompts as text queries. In this paper, we present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), a novel benchmark that comprehensively assesses the performance of Video-LMMs across 11 diverse real-world video dimensions. We evaluate 9 recent models, including both open-source and closed-source variants, and find that most of the Video-LMMs, especially open-source ones, struggle with robustness and reasoning when dealing with complex videos. Based on our analysis, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique to enhance the performance of existing Video-LMMs. Our findings provide valuable insights for building the next generation of human-centric AI systems with advanced robustness and reasoning capabilities. Our dataset and code are publicly available.
CVRR-ES assesses the reasoning and robustness of Video-LMMs on complex videos in real-world contexts.
Main contributions:
- Complex Video Reasoning and Robustness Benchmark: We present CVRR-ES, an open-ended Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions.
- Comprehensive Evaluation: We evaluate recent Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries.
- Key Analysis: We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities.
- Dual-Step Contextual Prompting Technique: To improve Video-LMMs' reasoning and robustness abilities, we formulate a model-agnostic, training-free prompting technique that effectively enhances their performance on the CVRR-ES benchmark.
In the table below, we present the evaluation results of 9 recent Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark.
We integrate the DSCP technique with Video-LMMs and present results on the CVRR-ES benchmark in the figure below. DSCP improves performance compared with standard prompting (i.e., using only the question itself). Gains from the DSCP technique are shown in green.
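For reference, a minimal sketch of how a dual-step contextual prompting wrapper could be structured around a generic Video-LMM interface is shown below. The `VideoLMM`-style `generate(video, prompt)` interface and the prompt wording are illustrative assumptions; the exact DSCP prompts are described in the paper and in PREDICTIONS.md.

```python
# Minimal sketch of a dual-step contextual prompting wrapper (illustrative only).
# `model.generate(video, prompt)` is an assumed interface, not the actual API of
# any specific model in this repository.

class DualStepContextualPrompting:
    def __init__(self, model):
        self.model = model  # any Video-LMM exposing generate(video, prompt) -> str

    def answer(self, video, question: str) -> str:
        # Step 1: elicit contextual reasoning about the video before answering.
        context_prompt = (
            "Carefully describe the key objects, actions, and temporal order of "
            "events in this video, noting anything unusual or ambiguous."
        )
        context = self.model.generate(video, context_prompt)

        # Step 2: answer the question conditioned on the elicited context, with
        # instructions encouraging robustness to misleading or false-premise queries.
        answer_prompt = (
            f"Video context: {context}\n"
            f"Question: {question}\n"
            "Answer based only on what is actually visible in the video. "
            "If the question contains a false premise, point it out."
        )
        return self.model.generate(video, answer_prompt)
```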
We study the contribution of each step of DSCP and compare it with the chain-of-thought prompting method. The results for the top 5 performing Video-LMMs are shown in the table below.
Set up the CVRR-ES dataset by following the steps below.
- The CVRR-ES dataset can be downloaded using this link (zipped). The CVRR-ES benchmark consists of 2400 open-ended question-answer (QA) pairs spanning 214 unique videos across 11 diverse evaluation dimensions. After unzipping, the dataset structure looks like the following:
CVRR-ES/
├── interpretation_of_visual_context/
│   ├── annotations_interpretation_of_visual_context.json
│   ├── captions_interpretation_of_visual_context.json
│   ├── 163.mp4
│   └── ... # remaining videos
├── partial_actions/
│   ├── annotations_partial_actions.json
│   ├── captions_partial_actions.json
│   ├── 121.mp4
│   └── ... # remaining videos
├── unusual_and_physically_anomalous_activities/
│   ├── annotations_unusual_and_physically_anomalous_activities.json
│   ├── captions_unusual_and_physically_anomalous_activities.json
│   ├── 101.mp4
│   └── ... # remaining videos
└── ... # remaining video-evaluation dimension folders
Here, each folder corresponds to a single video evaluation dimension and contains annotations (QA pairs and captions) alongside videos.
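As a quick sanity check after unzipping, the annotation files can be inspected with a short script like the sketch below. The exact schema of each annotation entry is an assumption here; inspect the JSON files or the evaluation code for the actual field names.

```python
import json
from pathlib import Path

# Sketch: iterate over every evaluation-dimension folder and count QA pairs and videos.
# The content of each annotations_*.json file is assumed to be a list of QA entries;
# adjust after inspecting the actual files.
root = Path("CVRR-ES")
for dim_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    ann_file = next(dim_dir.glob("annotations_*.json"))
    with open(ann_file) as f:
        qa_pairs = json.load(f)
    videos = list(dim_dir.glob("*.mp4"))
    print(f"{dim_dir.name}: {len(qa_pairs)} QA pairs, {len(videos)} videos")
```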
Note that videos sourced from the Something-Something V2 (SSv2) dataset are not included in the zipped folder due to copyright policies. To complete the dataset:
- Download the SSv2 dataset from the official website (it is publicly available). You will be prompted to register by creating an account.
- Identify the videos for the CVRR-ES dataset by retrieving the videos with the IDs given in this text file.
- Rename the videos following the mapping in the text file and add them to their respective evaluation dimension folders in the unzipped CVRR-ES folder (a helper sketch is provided after this list).
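A small helper along these lines could automate the renaming step. The mapping-file format (one `ssv2_id target_path` pair per line) and the directory names used here are illustrative assumptions, so adapt them to the actual text file.

```python
import shutil
from pathlib import Path

# Sketch: copy SSv2 videos into the CVRR-ES folders using a mapping file.
# Assumed mapping-file format (illustrative), one line per video, e.g.:
#   78687 unusual_and_physically_anomalous_activities/101.mp4
ssv2_dir = Path("somethingsomething_v2/videos")   # downloaded SSv2 videos (.webm)
cvrr_dir = Path("CVRR-ES")
mapping_file = Path("ssv2_video_mapping.txt")

for line in mapping_file.read_text().splitlines():
    if not line.strip():
        continue
    ssv2_id, target_rel_path = line.split()
    src = ssv2_dir / f"{ssv2_id}.webm"             # SSv2 ships .webm; transcode to .mp4 if required
    dst = cvrr_dir / target_rel_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dst)
    print(f"copied {src.name} -> {target_rel_path}")
```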
To evaluate Video-LMMs on the CVRR-ES benchmark, please follow these steps:
Follow the instructions in INSTALL.md to install the packages and model weights required to run the sample Video-LMM code for evaluation.
For each QA pair, we generate answers from Video-LMMs in an autoregressive manner. Predictions are generated using either standard prompting (i.e., the question only) or our Dual-Step Contextual Prompting (DSCP) technique. Follow PREDICTIONS.md for sample code for generating answers using TimeChat, Video-LLaVA, GPT4-Vision and Gemini-Vision-Pro.
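The answer-generation loop typically looks like the sketch below; `model.generate(video_path, prompt)` and the annotation field names stand in for the model-specific code in PREDICTIONS.md and are assumptions, not the actual interfaces.

```python
import json
from pathlib import Path

# Sketch of the answer-generation loop (standard prompting: question only).
# `model.generate(video_path, prompt)` is a placeholder for the model-specific
# inference code in PREDICTIONS.md; the "VideoID"/"Q"/"A" keys are assumptions.
def generate_predictions(model, dimension_dir: Path, output_file: Path):
    ann_file = next(dimension_dir.glob("annotations_*.json"))
    qa_pairs = json.load(open(ann_file))
    predictions = []
    for qa in qa_pairs:
        video_path = dimension_dir / qa["VideoID"]
        answer = model.generate(video_path, qa["Q"])   # autoregressive decoding
        predictions.append({"Q": qa["Q"], "A": qa["A"], "pred": answer})
    output_file.write_text(json.dumps(predictions, indent=2))
```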
Once the answer predictions are generated in step 1, we use an LLM as a judge to quantify the correctness of each Video-LMM prediction for every question in the CVRR-Evaluation Suite. Please follow the instructions in LLM_SCORING.md for the LLM-assisted evaluation.
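The scoring step follows the familiar LLM-as-a-judge pattern; a minimal sketch using the OpenAI Python client is shown below. The judge prompt, model name, and expected output format are illustrative assumptions; LLM_SCORING.md contains the actual prompts and parsing.

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Sketch of an LLM-as-a-judge call for one QA pair. The prompt wording, judge model,
# and JSON output contract are illustrative, not the repository's exact setup.
def judge(question: str, correct_answer: str, prediction: str) -> dict:
    prompt = (
        "You are evaluating a video question-answering model.\n"
        f"Question: {question}\n"
        f"Correct answer: {correct_answer}\n"
        f"Predicted answer: {prediction}\n"
        'Reply with JSON: {"correct": "yes" or "no", "score": integer 0-5}.'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```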
The first version of the CVRR-ES dataset is already finalized. However, for additional reference, we are providing code snippets alongside LLM prompts that we used to generate the initial set of QA pairs.
Please refer to QA_GENERATION.md for instructions and sample code on generating question-answer pairs for CVRR-ES videos using LLM.
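As a rough illustration, QA generation from video captions could be driven by a prompt like the one in the sketch below; the prompt wording, judge model, and output parsing are assumptions made for this sketch, and QA_GENERATION.md documents the actual procedure.

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Sketch: draft open-ended question-answer pairs from a human-written video caption.
# The prompt and the expected JSON output format are illustrative assumptions.
def draft_qa_pairs(caption: str, num_pairs: int = 3) -> list[dict]:
    prompt = (
        f"Video description: {caption}\n"
        f"Write {num_pairs} open-ended question-answer pairs that test reasoning "
        "about this video. Return a JSON list of objects with 'Q' and 'A' keys."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return json.loads(response.choices[0].message.content)
```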
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The videos in the CVRR-ES dataset are collected from public academic benchmarks (refer to the main paper for more details) and from YouTube, and are for academic research use only. By using CVRR-ES, you agree not to use the dataset for any harm or unfair discrimination. Please note that the data in this dataset may be subject to other agreements. Video copyrights belong to the original dataset providers, video creators, or platforms.
If you find our work and this repository useful, please consider giving our repo a star and citing our paper as follows:
@article{Khattak2024cvrres,
title={How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs},
author={Khattak, Muhammad Uzair and Naeem, Muhammad Ferjad and Hassan, Jameel and Naseer, Muzammal and Tombari, Federico and Khan, Fahad Shahbaz and Khan, Salman},
journal={arXiv preprint arXiv:2405.03690},
year={2024}
}
If you have any questions, please create an issue on this repository or contact us at uzair.khattak@mbzuai.ac.ae.
This repository has borrowed Video-LMM evaluation code from TimeChat and LLaMA-VID. We also borrowed partial code from the Video-ChatGPT repository. We thank the authors for releasing their code.