Vary
[ECCV 2024] Official code implementation of Vary: Scaling Up the Vision Vocabulary of Large Vision Language Models.
<a href="https://trendshift.io/repositories/5978" target="_blank"><img src="https://trendshift.io/api/badge/repositories/5978" alt="Ucas-HaoranWei%2FVary | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
Haoran Wei*, Lingyu Kong*, Jinyue Chen, Liang Zhao, Zheng Ge, Jinrong Yang, Jianjian Sun, Chunrui Han, Xiangyu Zhang
<p align="center"> <img src="assets/logo.jpg" style="width: 200px" align=center> </p>

Release
- [2024/12/24] 🔥🔥🔥 My new work on system-2 perception, slow-perception, is released.
- [2024/9/03] 🔥🔥🔥 We release a very strong and comprehensive OCR model GOT-OCR2.0.
- [2024/7/16] 🎉🎉🎉 OneChart has been accepted as an oral presentation at ACM MM 2024 (3.97%)!
- [2024/7/2] 🔥🔥🔥 Vary has been accepted by ECCV 2024. To thank everyone for their attention, I will soon release a model that performs on par with Vary-document.
- [2024/5/27] 🔥🔥🔥 We present a document understanding benchmark in Fox.
- [2024/5/24] 🔥🔥🔥 We propose a multi-page document understanding work, Fox, which supports 8-page PDF-image input!!!
- [2024/4/21] 🔥🔥🔥 For OneChart, we have released the web demo on the Project Page. Have fun!!
- [2024/4/21] 🔥🔥🔥 We present the Vary-tiny LAVIS codebase (for training from scratch) and the Vary-600k dataset (300K English and 300K Chinese pages) here!!!
- [2024/4/15] 🔥🔥🔥 We release a chart parsing model, OneChart, here.
- [2024/4/12] 🔥🔥🔥 We will release a chart parsing model based on Vary-tiny next week. The model supports both English and Chinese charts.
- [2024/3/16] 🔥🔥🔥 I found that many friends are very interested in Vary-tiny (OPT-125M), so I open-sourced it here: a PDF-dense OCR and object detection version.
- [2024/1/23] 🔥🔥🔥 We release Vary-toy here. Besides, we show the very strong Vary-family results here.
- [2023/12/29] 🔥🔥🔥 We will release a new model (a small-size Vary, about 2B) at the beginning of next month and introduce a new feature (object detection). Our online demo will be temporarily closed to prepare for the deployment of the new model.
- [2023/12/11] We released the online demo, have fun!
- [2023/12/11] We released the code of Vary (train and inference)!
Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, GPT-4, Qwen, and LLaVA.
Contents
- Install
- Vary Weights
- Demo
- Train
Install
- Clone this repository and navigate to the Vary folder
```Shell
git clone https://github.com/Ucas-HaoranWei/Vary.git
cd Vary
```
- Install Package
```Shell
conda create -n vary python=3.10 -y
conda activate vary
pip install -e .
```
- Install Flash-Attention
```Shell
pip install ninja
pip install flash-attn --no-build-isolation
```
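To verify that the environment is set up correctly, a quick import check (assuming a CUDA-capable GPU is visible) can help:

```Shell
# Print the installed versions and confirm CUDA is available
python -c "import torch, flash_attn; print(torch.__version__, flash_attn.__version__, torch.cuda.is_available())"
```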
Vary Weights
- If you urgently need the weights for your research, please contact me by email.
- Download the CLIP-ViT-L weights from Hugging Face.
- The Vary-toy weights are available here.
Demo
- Update the CLIP-ViT path in the code (/cache/vit-large-patch14/) to your own path.
```Shell
python vary/demo/run_qwen_vary.py --model-name /vary/model/path/ --image-file /an/image/file.png
```
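If you are unsure where that path is hard-coded, a recursive search over the repo will surface it (a sketch; file locations may differ across versions):

```Shell
# Find every occurrence of the default CLIP-ViT path in the source tree
grep -rn "/cache/vit-large-patch14" vary/
```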
Train
- We currently do not plan to open-source the intermediate weights.
- However, we release the training code, so you can train on your own datasets. To do so, use the commands below:
- For Vary-base (on one machine; if you have multiple machines, prepare a DeepSpeed hostfile first, as sketched after the command below):
```Shell
deepspeed Vary/train/train_qwen_vary.py --deepspeed /Vary/zero_config/zero2.json \
    --model_name_or_path /Qwen-7B/path/ \
    --vision_tower /vit-large-patch14/path/ \
    --freeze_vision_tower True \
    --freeze_lm_model False \
    --vision_select_layer -2 \
    --use_im_start_end True \
    --bf16 True \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5000 \
    --save_total_limit 1 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --report_to none \
    --per_device_train_batch_size 4 \
    --num_train_epochs 1 \
    --learning_rate 5e-5 \
    --datasets data_name1+data_name2+data_name3 \
    --output_dir /path/to/output/
```
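For multi-machine runs, DeepSpeed reads a hostfile that lists each node and its GPU count. A minimal sketch (the node names and slot counts below are placeholders for your cluster):

```Shell
# Hypothetical hostfile: one line per node, slots = number of GPUs on that node
cat > hostfile <<'EOF'
worker-1 slots=8
worker-2 slots=8
EOF
# Then pass it to the same launch command via --hostfile
deepspeed --hostfile=hostfile Vary/train/train_qwen_vary.py --deepspeed /Vary/zero_config/zero2.json  # ...same flags as above
```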
- For Vary-tiny:
```Shell
deepspeed Vary/train/train_opt.py --deepspeed /Vary/zero_config/zero2.json \
    --model_name_or_path /opt125m/path/ \
    --conversation_version opt \
    --freeze_vision_tower False \
    --freeze_lm_model False \
    --use_im_start_end True \
    --bf16 True \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5000 \
    --save_total_limit 1 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --report_to none \
    --per_device_train_batch_size 16 \
    --num_train_epochs 1 \
    --learning_rate 5e-5 \
    --datasets data_name1+data_name2+data_name3 \
    --output_dir /path/to/output/
```
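Note that --datasets joins multiple dataset names with '+'. Since Vary builds on the LLaVA codebase, the annotations presumably follow LLaVA's conversation-style JSON; the sample below is a hypothetical illustration (field names assumed from LLaVA, not confirmed for Vary):

```json
[
  {
    "image": "page_0001.png",
    "conversations": [
      {"from": "human", "value": "<image>\nProvide the OCR results of this image."},
      {"from": "gpt", "value": "...the dense text of the page..."}
    ]
  }
]
```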
Contact
If you have any questions about the code or the paper, feel free to email me at weihaoran18@mails.ucas.ac.cn.
Acknowledgement
- LLaVA: the codebase we built upon!
- Qwen: the LLM base model of Vary, which is good at both English and Chinese!
Citation
If you find our work useful in your research, please consider citing Vary:
```bibtex
@article{wei2023vary,
  title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models},
  author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2312.06109},
  year={2023}
}

@article{wei2024small,
  title={Small Language Model Meets with Reinforced Vision Vocabulary},
  author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yu, En and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2401.12503},
  year={2024}
}
```