Ting Liu1* ,
Liangtao Shi2*,
Richang Hong2,
Yue Hu1,
Quanjun Yin1✉️,
Linfeng Zhang3✉️
1National University of Defense Technology, 2Hefei University of Technology,
3Shanghai Jiao Tong University
The vision tokens in multimodal large language models usually exhibit significant spatial and temporal redundancy and take up most of the input tokens, which harms inference efficiency. To address this problem, some recent works drop unimportant tokens during inference, where the importance of each token is decided only by information from either the vision encoding stage or the prefilling stage. In this paper, we propose Multi-stage Token Dropping (MustDrop) to measure the importance of each token across its whole lifecycle, including the vision encoding stage, prefilling stage, and decoding stage.

Comparison of vision token dropping methods: (a) methods that drop tokens only during the vision encoding stage, e.g., PruMerge and ToMe; (b) methods that remove tokens only in the prefilling phase, e.g., FastV and SparseVLM; and (c) our MustDrop approach, which gradually removes invalid tokens during the vision encoding, prefilling, and decoding stages.
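The staged idea above can be sketched in a few lines. This is a minimal illustration, not the MustDrop implementation: the per-stage keep ratios and random importance scores below are hypothetical stand-ins for the method's actual importance measures.

```python
import random

def drop_tokens(tokens, scores, keep_ratio):
    """Keep the top keep_ratio fraction of tokens by importance score."""
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    return [tokens[i] for i in sorted(ranked[:k])]

# Simulated lifecycle: each stage re-scores the surviving tokens and drops
# the least important ones, so pruning is gradual rather than one-shot.
# The ratios here are illustrative, not the paper's settings.
random.seed(0)
tokens = list(range(576))  # e.g., 24x24 = 576 vision patches in LLaVA-1.5
for stage, ratio in [("vision encoding", 0.5), ("prefilling", 0.5), ("decoding", 0.67)]:
    scores = [random.random() for _ in tokens]  # stand-in for stage-specific importance
    tokens = drop_tokens(tokens, scores, ratio)

print(len(tokens))  # 576 -> 288 -> 144 -> 96 tokens surviving
```

The point of the loop is that importance is re-estimated at every stage, so a token that looks redundant to the vision encoder can still be rescued (or later dropped) by text-conditioned evidence in prefilling and decoding.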
- Clone this repository.
git clone https://github.com/liuting20/MustDrop.git
cd MustDrop
- Install necessary packages
conda create -n MustDrop python=3.10 -y
conda activate MustDrop
pip install -e .
- Download the multimodal benchmarks
Please follow the detailed instructions in LLaVA-Evaluation.
- Download LLaVA and put it under ./liuhaotian/llava-v1.5-7b.
Specifically, --sparse in the script indicates whether to perform sparsification, while --global_thr and --individual_thr control the degree of token sparsity.
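As a rough sketch of how a global and an individual threshold might combine (the predicate and score names here are hypothetical; the actual criterion lives in the evaluation scripts):

```python
def keep_token(global_score, individual_score,
               global_thr=0.001, individual_thr=0.001):
    """A token survives if it is important globally OR individually.
    Raising either threshold drops more tokens (fewer survive)."""
    return global_score > global_thr or individual_score > individual_thr

# Illustrative (global, individual) score pairs for three tokens.
scores = [(0.0020, 0.0005),   # globally important -> kept
          (0.0005, 0.0005),   # below both thresholds -> dropped
          (0.0009, 0.0020)]   # individually important -> kept
kept = [s for s in scores if keep_token(*s)]
print(len(kept))  # 2 of 3 tokens kept
```

This matches the pattern in the examples below: the 192-token setting uses the loosest thresholds, and tighter budgets (128, 64 tokens) raise the thresholds to prune more aggressively.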
- Example for evaluating TextVQA results (192 tokens, global_thr = 0.001, individual_thr = 0.001):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
- Example for evaluating TextVQA results (128 tokens, global_thr = 0.0012, individual_thr = 0.001):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
- Example for evaluating TextVQA results (64 tokens, global_thr = 0.011, individual_thr = 0.01):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
This project is released under the Apache 2.0 license.
If you use MustDrop in your research, please cite our work by using the following BibTeX entry:
@article{liu2024multi,
title={Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model},
author={Liu, Ting and Shi, Liangtao and Hong, Richang and Hu, Yue and Yin, Quanjun and Zhang, Linfeng},
journal={arXiv preprint arXiv:2411.10803},
year={2024}
}
We extend our gratitude to the open-source efforts of LLaVA, SparseVLMs, and VideoLLaVA.