[ECCV24] VISA: Reasoning Video Object Segmentation via Large Language Model

<div align=center> <img src="assert/architecture.png" style="width:100%;"> </div>

🚀 Performance

<div style="text-align: justify;"> VISA demonstrates remarkable proficiency in handling complex segmentation tasks that require: (a) reasoning based on world knowledge; (b) inference of future events; and (c) a comprehensive understanding of video content. </div> <div align=center> <img src="assert/performance.png" style="width:50%;"> </div>

🛠️ Installation

pip install -r requirements.txt
pip install flash-attn --no-build-isolation
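
A quick, optional sanity check (not part of the repository) that PyTorch sees a GPU and that flash-attn installed correctly:

# Optional environment check; not part of the VISA repo.
import torch
print("CUDA available:", torch.cuda.is_available())
try:
    import flash_attn  # installed via `pip install flash-attn --no-build-isolation`
    print("flash-attn version:", flash_attn.__version__)
except ImportError:
    print("flash-attn missing; rerun the second pip command above")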

🦄 Training and Validation

1. Training Data Preparation

Before training, please download the datasets and configure their paths in dataset_config.py (an illustrative sketch of this configuration follows the directory listings below).

<details open> <summary> <strong>LISA's Dataset</strong> </summary>

Follow LISA to prepare LISA's datasets. The dataset folder should be stored in the $LISA_ROOT folder.

LISA_ROOT
├── ade20k
├── coco
├── cocostuff
├── llava_dataset
├── mapillary
├── reason_seg
├── refer_seg
└── vlpart
</details> <details open> <summary> <strong>Chat-UniVi's Dataset</strong> </summary>

Follow Chat-UniVi/Chat-UniVi-Instruct to prepare Chat-UniVi-Instruct datasets. The dataset folder should be stored in the $ChatUniVi_ROOT folder.

ChatUniVi_ROOT
├── Fine-tuning
│   ├── MIMIC_imageonly
│   └── VIDEO
└── ScienceQA_tuning
</details> <details open> <summary> <strong>RVOS's Dataset</strong> </summary>
  1. Reasoning Video Segmentation Dataset: ReVOS.
  2. Referring Video Segmentation Datasets: Ref-Youtube-VOS, Ref-DAVIS17, MeViS.
  3. Open-Vocabulary Video Instance Segmentation Dataset: LV-VIS. Download mask_dict.json and meta_expressions.json from OneDrive or BaiduPan, then put these annotation files in the $RVOS_ROOT/lvvis/train directory as follows.
RVOS_ROOT
├── ReVOS
│   ├── JPEGImages
│   ├── mask_dict.json
│   ├── mask_dict_foreground.json
│   ├── meta_expressions_train_.json
│   └── meta_expressions_valid_.json
├── lvvis
│   └── train
│       ├── JPEGImages
│       ├── mask_dict.json
│       └── meta_expressions.json
├── Ref-Youtube-VOS
│   ├── meta_expressions
│   │   ├── train/meta_expressions.json
│   │   └── valid/meta_expressions.json
│   ├── train
│   │   ├── JPEGImages
│   │   └── mask_dict.pkl
│   └── valid
│       └── JPEGImages
├── davis17
│   ├── meta_expressions
│   │   ├── train/meta_expressions.json
│   │   └── valid/meta_expressions.json
│   ├── train
│   │   ├── JPEGImages
│   │   └── mask_dict.pkl
│   └── valid
│       ├── JPEGImages
│       └── mask_dict.pkl
└── mevis
</details>
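
As noted above, dataset_config.py must point at the dataset roots prepared in this step. The exact variable names depend on the repository; purely as an illustration (hypothetical names and paths), the configuration amounts to recording the three roots:

# Illustrative sketch only: the variable names and layout of the real dataset_config.py may differ.
# It simply records where the dataset roots prepared above live on disk.
LISA_ROOT = "/data/LISA_ROOT"               # ade20k, coco, cocostuff, reason_seg, refer_seg, ...
CHATUNIVI_ROOT = "/data/ChatUniVi_ROOT"     # Fine-tuning/, ScienceQA_tuning/
RVOS_ROOT = "/data/RVOS_ROOT"               # ReVOS, lvvis, Ref-Youtube-VOS, davis17, mevis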

2. Pre-trained weights

<details open> <summary> <strong>Chat-UniVi</strong> </summary>

To train VISA-7B or VISA-13B, download the corresponding Chat-UniVi weights from Chat-UniVi-7B and Chat-UniVi-13B.

</details> <details open> <summary> <strong>SAM</strong> </summary>

Download the SAM ViT-H pre-trained weights (sam_vit_h_4b8939.pth) from the official link.

</details>
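
Both checkpoints can also be fetched programmatically; a minimal sketch, assuming the Hugging Face repo id Chat-UniVi/Chat-UniVi (the same id passed to --version below) and the public SAM ViT-H download URL:

# Optional helper (not part of the repo): download the base Chat-UniVi weights and the
# SAM ViT-H checkpoint. The official links in this README remain the reference.
import urllib.request
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Chat-UniVi/Chat-UniVi", local_dir="weights/Chat-UniVi-7B")

SAM_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
urllib.request.urlretrieve(SAM_URL, "weights/sam_vit_h_4b8939.pth")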

3. Training VISA

# Training VISA-7B
bash scripts/train_7b.sh 

# Extract consolidated fp32 weights from ZeRO stage 1, 2, or 3 DeepSpeed checkpoints.
cd /PATH/TO/VISA-7B/ckpt_model && python zero_to_fp32.py . ../pytorch_model.bin

# Merge LoRA weights
CUDA_VISIBLE_DEVICES="" python merge_lora_weights_and_save_hf_model.py \
  --version Chat-UniVi/Chat-UniVi \
  --weight /PATH/TO/VISA-7B/pytorch_model.bin \
  --save_path /PATH/TO/VISA-7B/hf_model
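
The three commands above form one pipeline: DeepSpeed writes sharded ZeRO checkpoints, zero_to_fp32.py consolidates them into a single pytorch_model.bin, and merge_lora_weights_and_save_hf_model.py folds the LoRA weights into the base Chat-UniVi model and saves it in Hugging Face format. A minimal sketch that chains the last two steps from Python, using the same placeholder paths:

# Illustrative wrapper around the two post-training steps shown above (placeholder paths).
import os
import subprocess

CKPT_DIR = "/PATH/TO/VISA-7B/ckpt_model"
MERGED_BIN = "/PATH/TO/VISA-7B/pytorch_model.bin"

# Consolidate ZeRO shards into a single fp32 state dict.
subprocess.run(["python", "zero_to_fp32.py", ".", MERGED_BIN], cwd=CKPT_DIR, check=True)

# Merge LoRA weights into the base model on CPU and save a Hugging Face-format model.
env = {**os.environ, "CUDA_VISIBLE_DEVICES": ""}
subprocess.run(
    ["python", "merge_lora_weights_and_save_hf_model.py",
     "--version", "Chat-UniVi/Chat-UniVi",
     "--weight", MERGED_BIN,
     "--save_path", "/PATH/TO/VISA-7B/hf_model"],
    env=env, check=True,
)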

4. Validation

<details open> <summary> <strong>1. Using `VISA` to generate the predicted masks for each video <a href="https://github.com/cilinyan/VISA/blob/main/scripts/val_7b_video.sh">[demo]</a></strong> </summary>
deepspeed --master_port=24999 train_ds.py \
  --version="/PATH/TO/VISA-7B/hf_model" \
  --vision_pretrained="/PATH/TO/sam_vit_h_4b8939.pth" \
  --log_base_dir="/PATH/TO/LOG_BASE_DIR" \
  --exp_name="val_7b" \
  --balance_sample \
  --dataset="reason_seg" \
  --sample_rates="13" \
  --val_dataset "revos_valid" \
  --eval_only 
</details> <details open> <summary> <strong>2. Using <a href="https://github.com/dvlab-research/LLaMA-VID">LLaMA-VID</a> to generate the target frame for each video</strong> </summary>

You can directly download the results of our run from OneDrive or BaiduPan.

  • Run http_server_mp.py to build the API server for LLaMA-VID [demo]

    python utils_llamavid/llamavid_server.py \
        --vision_tower /PATH/TO/eva_vit_g.pth \
        --image_processor /PATH/TO/openai/clip-vit-large-patch14 \
        --model-path /PATH/TO/YanweiLi/llama-vid-13b-full-224-video-fps-1
    
  • Using the API for inference [demo]

    python utils_llamavid/llamavid_client.py \
        --video_root /PATH/TO/ReVOS/JPEGImages \
        --data_json_file /PATH/TO/ReVOS/meta_expressions_valid_.json
    
</details> <details open> <summary> <strong>3. Using <a href="https://github.com/cilinyan/VISA/blob/main/XMem/tracking.py">XMem</a> for mask propagation <a href="https://github.com/cilinyan/VISA/blob/c53d2cd31407eab583c5eb04f84fd95b4694f2ce/XMem/tracking.py#L103-L110">[demo]</a> </strong> </summary> </details> <details open> <summary> <strong>4. Evaluate ReVOS's performance <a href="https://github.com/cilinyan/VISA/blob/main/tools/eval_revos.py#L74-L81">[demo]</a> </strong> </summary>
cd tools
python eval_revos.py /PATH/TO/FINAL_ANNOTATION [ARGS]
</details>
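
eval_revos.py computes the standard video-segmentation metrics over the propagated masks, i.e. region similarity J, contour accuracy F, and their mean J&F. For intuition only, here is a minimal sketch of the per-frame region similarity J (mask IoU), assuming binary NumPy masks; the official tools/eval_revos.py remains the reference implementation:

# Minimal per-frame region-similarity (J, i.e. mask IoU) sketch; illustration only.
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: count as a perfect match
        return 1.0
    return np.logical_and(pred, gt).sum() / union

# Example: two toy 4x4 masks.
pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1
gt = np.zeros((4, 4), dtype=np.uint8); gt[1:4, 1:4] = 1
print(f"J = {region_similarity(pred, gt):.3f}")   # 4 / 9 = 0.444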

📑 Todo list

  • [x] Release code with Text-guided Frame Sampler's Local Sampling

  • [ ] Release VISA model weights issue #6

  • [ ] Release code with Text-guided Frame Sampler's Global-Local Sampling

⭐ Cite

If you find this project useful in your research, please consider citing:

@article{yan2024visa,
  title={VISA: Reasoning Video Object Segmentation via Large Language Models},
  author={Yan, Cilin and Wang, Haochen and Yan, Shilin and Jiang, Xiaolong and Hu, Yao and Kang, Guoliang and Xie, Weidi and Gavves, Efstratios},
  journal={arXiv preprint arXiv:2407.11325},
  year={2024}
}

🎖️ Acknowledgement

This work is built upon LLaVA, SAM, LISA, Chat-UniVi, MeViS, LLaMA-VID, and XMem.
