# DivPrune

The repository for *DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models*. [arXiv]

DivPrune has been accepted to CVPR 2025 🎉.
Link to Huawei AI Gallery Notebook: [AI Gallery]
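As a rough illustration of the idea (not the repo's implementation): DivPrune casts visual token pruning as a max-min diversity problem, greedily keeping the tokens that are farthest from the already-selected set. A minimal NumPy sketch, in which the function name, the cosine distance, and the choice of starting token are illustrative assumptions:

```python
import numpy as np

def divprune_select(tokens: np.ndarray, keep_ratio: float = 0.098) -> np.ndarray:
    """Greedy max-min diversity selection over visual token features.

    tokens: (N, D) array of token features; returns indices of kept tokens.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    # Pairwise cosine distances: 1 - normalized dot products.
    normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    dist = 1.0 - normed @ normed.T
    selected = [0]            # start from an arbitrary token
    min_dist = dist[0].copy() # distance of every token to the selected set
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))  # farthest token from the selected set
        selected.append(nxt)
        min_dist = np.minimum(min_dist, dist[nxt])
    return np.array(sorted(selected))
```

Running this on the `(N, D)` visual-token features of an image would return the indices of the tokens to retain; the actual selection logic lives in the repo's code.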
<div align="center"> <img src="./overview.jpg" alt="Our approach" width="50%"> </div>

## Setup Environment
```shell
conda create -n divprune python=3.10 -y
conda activate divprune
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
cd LLaVA
pip install -e .
cd ..
```
## Main Results
Use the following script to reproduce the results in the paper. The default pretrained model is LLaVA-1.5 7B; change the pretrained model to get results for other models. The default retained ratio is 0.098; adjust `SUBSET_RATIO` to get results for other pruning ratios.

```shell
bash ./run_Divprune
```
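For reference, the retained ratio maps to a visual-token budget roughly as follows (a sketch; the exact rounding the script applies is an assumption). LLaVA-1.5 encodes each image into 576 visual tokens, so a ratio of 0.098 keeps about 56 of them:

```python
def retained_tokens(total_visual_tokens: int, subset_ratio: float) -> int:
    # Number of visual tokens kept after pruning (floor rounding assumed).
    return max(1, int(total_visual_tokens * subset_ratio))

print(retained_tokens(576, 0.098))  # 56 of LLaVA-1.5's 576 visual tokens
```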
## TFLOPs

Use the following to get the TFLOP numbers reported in the paper.

```shell
python ./tflops.py
```
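`tflops.py` is not reproduced here, but a common back-of-the-envelope estimate (used, e.g., in the FastV paper) for the per-layer FLOPs of a decoder processing `n` tokens is `4nd² + 2n²d + 2ndm`, where `d` is the hidden size and `m` the FFN intermediate size. A sketch with LLaVA-1.5-7B-style constants (`d=4096`, `m=11008`, 32 layers), which are assumptions here rather than values read from the script:

```python
def layer_flops(n: int, d: int = 4096, m: int = 11008) -> float:
    # QKVO projections + attention score/value matmuls + two FFN matmuls.
    return 4 * n * d**2 + 2 * n**2 * d + 2 * n * d * m

def total_tflops(n_tokens: int, layers: int = 32) -> float:
    # Total forward-pass TFLOPs across all decoder layers.
    return layers * layer_flops(n_tokens) / 1e12
```

Because the linear terms dominate at these sequence lengths, pruning 576 visual tokens down to ~56 cuts the visual-token FLOPs by roughly 10x.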
## Efficiency

The following script calculates the memory usage and latency of DivPrune with the LLaVA-1.6 model.

```shell
bash ./eval_time.sh
```
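If you want to time other models yourself, the usual pattern is a few warmup iterations followed by an averaged wall-clock measurement (on GPU, also call `torch.cuda.synchronize()` around the timed region so queued kernels are counted). A framework-agnostic sketch; the helper name is ours, not the script's:

```python
import time

def measure_latency(fn, warmup: int = 3, iters: int = 10) -> float:
    """Average wall-clock seconds per call of fn, after warmup runs."""
    for _ in range(warmup):
        fn()  # warm caches / trigger lazy initialization
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```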
## References

The code is implemented based on lmms-eval, LLaVA, and FastV. We thank the contributors for their great work!
## Citation

If you find this code useful, please cite our paper:

```bibtex
@inproceedings{alvar2025divprune,
  title={DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models},
  author={Alvar, Saeed Ranjbar and Singh, Gursimran and Akbari, Mohammad and Zhang, Yong},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={9392--9401},
  year={2025}
}
```
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

