
SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference

Yuan Zhang1,3* , Chun-Kai Fan1*, Junpeng Ma2*, Wenzhao Zheng3✉️, Tao Huang4, Kuan Cheng1,

Denis Gudovskiy5, Tomoyuki Okuno5, Yohei Nakata5, Kurt Keutzer3, Shanghang Zhang1✉️

1School of Computer Science, Peking University, 2Fudan University,

3UC Berkeley, 4The University of Sydney, 5Panasonic Holdings Corporation

📜 News

🔥 [2025/03/06] We released SparseVLM v1.5! Higher accuracy, a more flexible pruning scheme, and compatibility with FlashAttention 2!

🔥 [2024/10/15] We released SparseVLM and its Project Page! The Code is now open-source!



👀 Overview

In vision-language models (VLMs), visual tokens usually account for a significant share of the computational overhead, despite carrying sparser information than text tokens. To address this, existing methods extract more compact image representations by modifying the image encoder or projector. While some recent works further sparsify vision tokens during decoding, they still ignore the guidance from language tokens, which contradicts the multimodal paradigm. We argue that visual tokens should be sparsified adaptively based on the question prompt, since the model may focus on different parts of the image (e.g., foreground or background) depending on the question, as shown in the figure below. Unlike previous methods with text-agnostic visual sparsification (c), e.g., the recent FastV, our SparseVLM (b) is guided by the question prompt to select relevant visual patches.

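To make the idea concrete, here is a minimal, self-contained sketch of text-guided visual token pruning. It illustrates the general principle only and is not the repository's implementation: the helper prune_visual_tokens, its tensor shapes, and the simple dot-product relevance score are our own assumptions, whereas SparseVLM itself derives its scores from the decoder's attention between text and visual tokens.

# Illustrative sketch of text-guided visual token pruning (NOT the repo code).
# Keeps the top-k visual tokens that receive the most attention from the
# question (text) tokens, mimicking the prompt-aware selection described above.
import torch


def prune_visual_tokens(visual_tokens: torch.Tensor,
                        text_tokens: torch.Tensor,
                        retained_tokens: int = 192) -> torch.Tensor:
    """visual_tokens: (B, Nv, D), text_tokens: (B, Nt, D)."""
    d = visual_tokens.size(-1)
    # Text-to-vision attention scores: (B, Nt, Nv).
    attn = torch.softmax(
        text_tokens @ visual_tokens.transpose(1, 2) / d ** 0.5, dim=-1)
    # Relevance of each visual token = average attention it receives
    # from all text tokens: (B, Nv).
    relevance = attn.mean(dim=1)
    # Keep the `retained_tokens` most relevant visual tokens per sample,
    # restoring their original order after top-k selection.
    keep = relevance.topk(retained_tokens, dim=-1).indices.sort(dim=-1).values
    batch_idx = torch.arange(visual_tokens.size(0)).unsqueeze(-1)
    return visual_tokens[batch_idx, keep]  # (B, retained_tokens, D)


if __name__ == "__main__":
    v = torch.randn(2, 576, 1024)   # e.g. 576 CLIP patches in LLaVA-1.5
    t = torch.randn(2, 32, 1024)    # question prompt tokens
    print(prune_visual_tokens(v, t, retained_tokens=192).shape)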

👨‍💻 Preparation

  1. Clone this repository and navigate to the SparseVLMs folder
git clone https://github.com/Gumpest/SparseVLMs.git
cd SparseVLMs
  2. Install the necessary packages
conda create -n SparseVLMs python=3.10 -y
conda activate SparseVLMs
pip install -e .
pip install transformers==4.37.0
pip install flash_attn==2.3.3
  3. Download the multimodal benchmarks

Please follow the detailed instructions in LLaVA-Evaluation.
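As a quick sanity check after installation (our own snippet, not part of the repository), the following Python lines verify that the pinned dependency versions above are actually in place and that a GPU is visible:

# Quick post-install check: confirms the pinned transformers version,
# reports whether flash_attn is importable, and whether CUDA is available.
import torch
import transformers

assert transformers.__version__.startswith("4.37"), transformers.__version__
print("transformers", transformers.__version__)

try:
    import flash_attn
    print("flash_attn", getattr(flash_attn, "__version__", "unknown"))
except ImportError:
    print("flash_attn not installed; FlashAttention 2 will be unavailable")

print("CUDA available:", torch.cuda.is_available())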

🎯 Usage

Specifically, the --retained_tokens option in the evaluation scripts specifies the number of visual tokens retained after the SparseVLM algorithm. Three presets are supported: 192, 128, and 64. If a different number of tokens is required, please modify ./llava/model/language_model/score.py.

  1. Example for evaluating MME results (default 192 tokens):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh
  2. Example for evaluating POPE results (default 192 tokens):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/pope.sh
  3. Example for evaluating ScienceQA results (default 192 tokens):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/sqa.sh
  4. Example for evaluating TextVQA results (default 192 tokens):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
  5. Example for evaluating MMBench results (default 192 tokens):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mmbench.sh
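For convenience, the five evaluations above can also be launched back to back with a small wrapper. This is our own sketch rather than anything shipped with the repository; it assumes it is run from the SparseVLMs root, where the scripts/v1_5/eval/*.sh paths exist:

# Run the benchmark scripts listed above one after another on GPU 0.
import os
import subprocess

BENCHMARKS = ["mme", "pope", "sqa", "textvqa", "mmbench"]

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
for name in BENCHMARKS:
    script = f"scripts/v1_5/eval/{name}.sh"
    print(f"==> running {script}")
    subprocess.run(["bash", script], env=env, check=True)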

License

This project is released under the Apache 2.0 license.

Citation

If you use SparseVLM in your research, please cite our work by using the following BibTeX entry:

@article{zhang2024sparsevlm,
  title={SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference},
  author={Zhang, Yuan and Fan, Chun-Kai and Ma, Junpeng and Zheng, Wenzhao and Huang, Tao and Cheng, Kuan and Gudovskiy, Denis and Okuno, Tomoyuki and Nakata, Yohei and Keutzer, Kurt and others},
  journal={arXiv preprint arXiv:2410.04417},
  year={2024}
}

Acknowledgment

We extend our gratitude to the open-source efforts of TCFormer, LLaVA, MiniGemini and VideoLLaVA.
