DART

[EMNLP 2025 main πŸ”₯] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More"

<div align="center"> <h1 style="display: inline-block; margin: 0;">πŸš€Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More</h1> </div> <h4 align="center">

Zichen Wen<sup>1,2</sup>, Yifeng Gao<sup>1</sup>, Shaobo Wang<sup>1</sup>, Junyuan Zhang<sup>2</sup>, Qintong Zhang<sup>2,4</sup>, <br> Weijia Li<sup>3,2</sup>, Conghui He<sup>2βœ‰</sup>, Linfeng Zhang<sup>1βœ‰</sup>,

<sup>1</sup>Shanghai Jiao Tong University, <sup>2</sup>Shanghai AI Laboratory, <br> <sup>3</sup>Sun Yat-sen University, <sup>4</sup>Peking University

</h4>

πŸ”₯ News

  • 2025.10.13 πŸ€—πŸ€— We have released our latest work EPIC, an efficient framework for progressive consistency distillation in multimodal large language models!
  • 2025.10.10 πŸ€—πŸ€— We've released our latest work, VTC-Bench. Come test whether your token compression method really works!
  • 2025.08.30 πŸ€—πŸ€— We have seamlessly integrated DART into Qwen2.5-VL.
  • 2025.08.21 πŸ€—πŸ€— Our DART is accepted at EMNLP'25 main!
  • 2025.05.15 πŸ€—πŸ€— Our analytical work on token compression has been accepted as ACL'25 Finding!
  • 2025.03.19 πŸ€—πŸ€— The implementation and evaluation scripts for LLaVA-Next are now available!
  • 2025.03.18 πŸ€—πŸ€— We have released the implementation of DART for Qwen2-VL, and now you can easily evaluate it using lmms-eval!
  • 2025.02.22 πŸ€—πŸ€— We release our latest work DART, a plug-and-play, training-free token reduction method that seamlessly integrates with efficient attention operators. Code is available!

πŸ‘€ Overview

<p align='center'> <img src='https://github.com/ZichenWen1/DART/blob/main/images/overview.png' alt='mask' width='1000px'> </p>

TLDR: We propose DART (Duplication-Aware Reduction of Tokens), a training-free method that prunes vision tokens based on duplication rather than importance, achieving 88.9% token reduction and a 1.99Γ— speed-up while maintaining performance and compatibility with efficient attention operators.
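To make the core idea concrete, here is a minimal, illustrative sketch of duplication-aware token selection: greedily keep the token that is least similar (by cosine similarity) to everything kept so far, so highly duplicated tokens are pruned first. This is a simplified stand-in (farthest-point sampling under cosine similarity), not the paper's exact algorithm; the function name and shapes are assumptions for illustration.

```python
import numpy as np

def prune_duplicate_tokens(tokens: np.ndarray, keep: int) -> np.ndarray:
    """Illustrative duplication-aware pruning (NOT the official DART code).

    tokens: (n, d) array of vision-token features, assumed non-zero.
    keep:   number of tokens to retain.
    Greedily selects tokens whose maximum cosine similarity to the
    already-kept set is smallest, i.e. the least duplicated tokens.
    """
    feats = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    kept = [0]                     # arbitrary seed token
    max_sim = feats @ feats[0]     # max cosine sim of each token to kept set
    for _ in range(keep - 1):
        max_sim[kept] = np.inf     # never re-select an already-kept token
        idx = int(np.argmin(max_sim))  # most "novel" remaining token
        kept.append(idx)
        max_sim = np.maximum(max_sim, feats @ feats[idx])
    return tokens[sorted(kept)]
```

With two identical tokens `[1, 0]` and one distinct token `[0, 1]`, keeping two tokens drops one duplicate and retains one copy of each direction.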

πŸ›  Preparation

LLaVA

  1. Clone this repository.

```shell
git clone https://github.com/ZichenWen1/DART
cd DART
```

  2. Environment Setup and Preparation

```shell
conda create -n DART python=3.10 -y
conda activate DART
pip install -e .
pip install flash-attn --no-build-isolation
```

  3. Download Multimodal Benchmark

Please follow the detailed instructions in LLaVA-Evaluation.

Qwen2-VL

```shell
conda create -n DART_Qwen2VL python=3.10 -y
conda activate DART_Qwen2VL
cd Qwen2-VL/transformers && pip install -e .
pip install accelerate qwen-vl-utils[decord]
pip install flash-attn --no-build-isolation
cd ../../lmms-eval && pip install -e .
```

Qwen2.5-VL

```shell
pip install -U transformers==4.55.4
```

🎯 Usage

LLaVA

πŸ“– Script Templates

```shell
bash scripts/v1_5/eval/[Benchmark].sh [Reduction_Ratio] [Max_Num_Truncation]
```

🐝 Examples

```shell
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh 0.778 128
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/pope.sh 0.778 128
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh 0.778 128
```
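As a sanity check on how the two arguments relate, assuming LLaVA-1.5's standard 576 vision tokens per image (a 24Γ—24 patch grid) β€” this pairing is an inference for illustration, not something the scripts state:

```python
# With 576 vision tokens (assumed LLaVA-1.5 default), a reduction ratio
# of 0.778 retains roughly 128 tokens, matching the second argument above.
total_vision_tokens = 576
reduction_ratio = 0.778
retained = round(total_vision_tokens * (1 - reduction_ratio))
print(retained)  # 128
```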

Qwen2-VL

🐝 Examples

```shell
cd Qwen2-VL
bash eval_scripts/lmms_eval.sh True [Reduction_Ratio]
```

Qwen2.5-VL

🐝 Examples

```shell
cd Qwen2_5-VL
bash eval_scripts/lmms_eval.sh True [Reduction_Ratio]
```

πŸ”‘ License

This project is released under the Apache 2.0 license.

πŸ“Œ Citation

If our findings help your research, please consider citing our papers in your publications.

```bibtex
@article{wen2025stop,
  title={Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More},
  author={Wen, Zichen and Gao, Yifeng and Wang, Shaobo and Zhang, Junyuan and Zhang, Qintong and Li, Weijia and He, Conghui and Zhang, Linfeng},
  journal={arXiv preprint arXiv:2502.11494},
  year={2025}
}

@article{wen2025token,
  title={Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?},
  author={Wen, Zichen and Gao, Yifeng and Li, Weijia and He, Conghui and Zhang, Linfeng},
  journal={arXiv preprint arXiv:2502.11501},
  year={2025}
}
```

πŸ‘ Acknowledgment

We extend our gratitude to the open-source efforts of LLaVA, Qwen2-VL, and lmms-eval.

πŸ“© Contact

For any questions about our paper or code, please email zichen.wen@outlook.com.
