
<p align="center"> <img src="https://v1.ax1x.com/2024/08/13/7GXwAh.png" width="500" style="margin-bottom: 0.2;"/> <p> <h3 align="center"> <a href="https://arxiv.org/abs/2311.06607">Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models</a></h3> <h2></h2> <h5 align="center"> Please give us a star ⭐ for the latest update. </h5> <h5 align="center">

arXiv License GitHub issues GitHub closed issues <br>

</h5>

[CVPR 2024] Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models<br> Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, Xiang Bai <br> arXiv Source_code Detailed Caption Model Weight Model Weight in Wisemodel

[TPAMI 2026] TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document<br> Yuliang Liu, Biao Yang, Qiang Liu, Zhang Li, Zhiyin Ma, Shuo Zhang, Xiang Bai <br> arXiv Source_code Data Model Weight

[NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks<br> Xingkui Zhu, Yiran Guan, Dingkang Liang, Yuchao Chen, Yuliang Liu, Xiang Bai <br> arXiv Source_code

[ICLR 2025] Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models<br> Mingxin Huang, Yuliang Liu, Dingkang Liang, Lianwen Jin, Xiang Bai <br> arXiv Source_code Model Weight in Wisemodel Model Weight

[IJCV 2025] Liquid: Language Models are Scalable and Unified Multi-modal Generators<br> Junfeng Wu, Yi Jiang, Chuofan Ma, Yuliang Liu, Hengshuang Zhao, Zehuan Yuan, Song Bai, Xiang Bai<br> arXiv Source_code

[ICCV 2025] LIRA: Inferring Segmentation in Large Multi-modal Models with Local Interleaved Region Assistance<br> Zhang Li, Biao Yang, Qiang Liu, Shuo Zhang, Zhiyin Ma, Liang Yin, Linger Deng, Yabo Sun, Yuliang Liu, Xiang Bai<br> arXiv Source_code

MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm<br> Zhang Li, Yuliang Liu, Qiang Liu, Zhiyin Ma, Ziyang Zhang, Shuo Zhang, Zidun Guo, Jiarui Zhang, Xinyu Wang, Xiang Bai<br> arXiv Source_code Model Weight Demo

News

  • 2025.6.6 🚀 MonkeyOCR: Try our document parsing model — Accurate, Fast, and Easy to Use.
  • 2025.4.17 🚀 Liquid: Bridging Text‑to‑Image and Image‑to‑Text in One Framework.
  • 2024.8.6 🚀 We release the paper Mini-Monkey.
  • 2024.4.5 🚀 Monkey is selected as a CVPR 2024 Highlight paper.
  • 2024.3.8 🚀 We release the paper TextMonkey.
  • 2024.1.3 🚀 Release the basic data generation pipeline. Data Generation
  • 2023.11.06 🚀 We release the paper Monkey.

🐳 Model Zoo

Monkey-Chat

|Model|Language Model|Transformers(HF)|MMBench-Test|CCBench|MME|SeedBench_IMG|MathVista-MiniTest|HallusionBench-Avg|AI2D Test|OCRBench|
|---|---|---|---|---|---|---|---|---|---|---|
|Monkey-Chat|Qwen-7B|🤗echo840/Monkey-Chat|72.4|48|1887.4|68.9|34.8|39.3|68.5|534|
|Mini-Monkey|internlm2-chat-1_8b|Mini-Monkey|---|75.5|1881.9|71.3|47.3|38.7|74.7|802|

Environment

```shell
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```

You can download the corresponding version of flash_attention from https://github.com/Dao-AILab/flash-attention/releases/ and install it with the following command:

```shell
pip install flash_attn-2.3.5+cu117torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl --no-build-isolation
```
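The wheel filename encodes the flash-attn version, CUDA version, PyTorch version, C++ ABI flag, and Python tag, so it must match your environment exactly. A small sketch that assembles the expected filename to search for on the releases page (`flash_attn_wheel_name` is a hypothetical helper; the pattern mirrors the asset name above and should be verified against the actual release assets):

```python
def flash_attn_wheel_name(version: str, cuda: str, torch: str,
                          py: str = "cp39", cxx11abi: bool = False) -> str:
    # Mirrors the naming of flash-attention release wheels, e.g.
    # flash_attn-2.3.5+cu117torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
    abi = "TRUE" if cxx11abi else "FALSE"
    return (f"flash_attn-{version}+cu{cuda}torch{torch}"
            f"cxx11abi{abi}-{py}-{py}-linux_x86_64.whl")

# For the environment created above (Python 3.9, CUDA 11.7, PyTorch 2.0):
print(flash_attn_wheel_name("2.3.5", "117", "2.0"))
```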

Train

We also provide Monkey's model definition and training code, which you can explore above. You can run training by executing finetune_ds_debug.sh for Monkey and finetune_textmonkey.sh for TextMonkey.

The json file used for Monkey training can be downloaded at Link.

Inference

Run the inference code for Monkey and Monkey-Chat:

```shell
python ./inference.py --model_path MODEL_PATH --image_path IMAGE_PATH --question "YOUR_QUESTION"
```
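For scripted use, inference can also be driven directly through Hugging Face Transformers. A minimal sketch, assuming the checkpoint's custom model code is enabled via `trust_remote_code=True` and that the `<img>…</img>` prompt wrapper matches the repo's inference.py (both `build_monkey_query` and `run_inference` are hypothetical helpers; verify the prompt format against your checkout):

```python
def build_monkey_query(image_path: str, question: str) -> str:
    # Monkey consumes the image path inline in the text prompt;
    # the checkpoint's custom tokenizer code loads the image from this path.
    return f"<img>{image_path}</img> {question} Answer: "

def run_inference(image_path: str, question: str,
                  checkpoint: str = "echo840/Monkey-Chat") -> str:
    # Heavy dependencies are imported lazily so the prompt helper stays light.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, device_map="auto", trust_remote_code=True
    ).eval()
    inputs = tokenizer(build_monkey_query(image_path, question),
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens from the front of the generated sequence.
    answer_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(answer_ids, skip_special_tokens=True)
```

Note that loading the full checkpoint requires a GPU with sufficient memory; the CLI above remains the simplest path for one-off queries.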

Demo

The demo is fast and easy to use: simply upload an image from your desktop or phone, or capture one directly. Demo_chat has also been launched as an upgraded version of the original demo to deliver an enhanced interactive experience.

We also provide the source code and model weights for the original demo, allowing you to customize certain parameters for a more tailored experience. The steps are as follows:

  1. Make sure you have configured the environment.
  2. You can choose to use the demo offline or online:
  • Offline:
    • Download the Model Weight.
    • Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the demo.py file to point to your model weight path.
    • Run the demo with `python demo.py`.
  • Online:
    • Run the demo, downloading the model weights on the fly, with `python demo.py -c echo840/Monkey`.
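If you prefer not to edit demo.py by hand for the offline path, the `DEFAULT_CKPT_PATH` assignment can be rewritten programmatically. A minimal sketch (`set_ckpt_path` is a hypothetical helper; the assignment text it targets is the `DEFAULT_CKPT_PATH="pathto/Monkey"` line mentioned above):

```python
import re
from pathlib import Path

def set_ckpt_path(demo_source: str, ckpt_path: str) -> str:
    # Replace the right-hand side of the DEFAULT_CKPT_PATH assignment
    # with the local checkpoint directory.
    return re.sub(r'DEFAULT_CKPT_PATH\s*=\s*"[^"]*"',
                  f'DEFAULT_CKPT_PATH = "{ckpt_path}"',
                  demo_source, count=1)

# Usage: rewrite demo.py in place before launching the demo.
# src = Path("demo.py").read_text()
# Path("demo.py").write_text(set_ckpt_path(src, "/models/Monkey"))
```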
    

For TextMonkey you can download the model weight from Model Weight and run the demo code:

```shell
python demo_textmonkey.py -c model_path
```

Before 14/11/2023, we observed that, for some randomly chosen images, Monkey can achieve more accurate results than GPT-4V.
<br>

<p align="center"> <img src="https://v1.ax1x.com/2024/04/13/7yS2yq.jpg" width="666"/> <p> <br>

Before 31/1/2024, Monkey-Chat ranked fifth in the Multimodal Model category on OpenCompass. <br>

<p align="center"> <img src="https://v1.ax1x.com/2024/04/13/7yShXL.jpg" width="666"/> <p> <br>

Dataset

You can download the training and testing data used by Monkey from Monkey_Data.

The json file used for Monkey training can be downloaded at Link.

The data from our multi-level description generation method is now open-sourced and available for download at Link. We have already uploaded the images used in the multi-level descriptions.
