
Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model

Ting Liu1* , Liangtao Shi2*, Richang Hong2, Yue Hu1,
Quanjun Yin1✉️, Linfeng Zhang3✉️

1National University of Defense Technology, 2Hefei University of Technology,
3Shanghai Jiao Tong University


👀 Overview

The vision tokens in multimodal large language models usually exhibit significant spatial and temporal redundancy and take up most of the input tokens, which harms inference efficiency. To address this, several recent works drop unimportant tokens during inference, where the importance of each token is decided only by information from either the vision encoding stage or the prefilling stage. In this paper, we propose Multi-Stage Token Dropping (MustDrop), which measures the importance of each token across its whole lifecycle, including the vision encoding, prefilling, and decoding stages.

Comparison of vision token dropping methods: (a) methods that drop tokens only during the vision encoding stage, e.g., PruMerge and ToMe; (b) methods that remove tokens only in the prefilling phase, e.g., FastV and SparseVLM; and (c) our MustDrop approach, which gradually removes invalid tokens during the vision encoding, prefilling, and decoding stages.

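To make the idea concrete, below is a minimal, hypothetical sketch of threshold-based vision token dropping. It is not the MustDrop implementation: the importance scores, tensor shapes, and the way a global and an individual threshold are combined are assumptions for illustration only.

# Conceptual sketch only: illustrates threshold-based vision token dropping.
# NOT the actual MustDrop implementation; scores and threshold semantics are assumed.
import torch

def drop_vision_tokens(vision_tokens, importance, global_thr=0.001, individual_thr=0.001):
    # Global criterion: a token's score must exceed a fraction of the total importance mass.
    global_keep = importance > global_thr * importance.sum()
    # Individual criterion: the score must also exceed an absolute per-token floor.
    individual_keep = importance > individual_thr
    keep = global_keep & individual_keep
    return vision_tokens[keep], keep

# Toy usage: 576 patch tokens (LLaVA-1.5 resolution) with random importance scores.
tokens = torch.randn(576, 1024)
scores = torch.rand(576).softmax(dim=0)
kept, mask = drop_vision_tokens(tokens, scores)
print(f"kept {kept.shape[0]} of {tokens.shape[0]} vision tokens")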

👨 Preparation

  1. Clone this repository.

git clone https://github.com/liuting20/MustDrop.git
cd MustDrop

  2. Install the necessary packages.

conda create -n MustDrop python=3.10 -y
conda activate MustDrop
pip install -e .

  3. Download the multimodal benchmarks. Please follow the detailed instructions in LLaVA-Evaluation.

  4. Download LLaVA and put it under ./liuhaotian/llava-v1.5-7b.

    LLaVA-1.5

    LLaVA-Next

🎯 Usage

Specifically, the --sparse flag in each evaluation script indicates whether to perform sparsification, while --global_thr and --individual_thr control the degree of token sparsity; a hypothetical sketch of how these flags might be parsed follows the examples below.

  1. Example for evaluating TextVQA (192 tokens, global_thr = 0.001, individual_thr = 0.001):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
  2. Example for evaluating TextVQA (128 tokens, global_thr = 0.0012, individual_thr = 0.001):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
  3. Example for evaluating TextVQA (64 tokens, global_thr = 0.011, individual_thr = 0.01):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
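For reference, the sketch below shows one plausible way the flags above could be consumed on the Python side of an evaluation entry point. The argument names come from this README; the entry point, defaults, and wiring are assumptions rather than the repository's actual code.

# Hypothetical sketch of parsing the sparsification flags described above;
# the actual MustDrop evaluation code may wire these differently.
import argparse

parser = argparse.ArgumentParser(description="Evaluation with optional vision token dropping")
parser.add_argument("--sparse", action="store_true", help="enable vision token sparsification")
parser.add_argument("--global_thr", type=float, default=0.001, help="global importance threshold")
parser.add_argument("--individual_thr", type=float, default=0.001, help="per-token importance threshold")
args = parser.parse_args()

if args.sparse:
    print(f"Token dropping enabled: global_thr={args.global_thr}, individual_thr={args.individual_thr}")
else:
    print("Token dropping disabled; running the dense baseline.")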

License

This project is released under the Apache 2.0 license.

Citation

If you use MustDrop in your research, please cite our work with the following BibTeX entry:

@article{liu2024multi,
  title={Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model},
  author={Liu, Ting and Shi, Liangtao and Hong, Richang and Hu, Yue and Yin, Quanjun and Zhang, Linfeng},
  journal={arXiv preprint arXiv:2411.10803},
  year={2024}
}

Acknowledgment

We extend our gratitude to the open-source efforts of LLaVA, SparseVLMs and VideoLLaVA.
