# ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models
ODISE: Open-vocabulary DIffusion-based panoptic SEgmentation exploits pre-trained text-image diffusion and discriminative models to perform open-vocabulary panoptic segmentation. It leverages the frozen representation of both these models to perform panoptic segmentation of any category in the wild.
This repository is the official implementation of ODISE introduced in the paper:
**Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models**
Jiarui Xu, Sifei Liu\*, Arash Vahdat\*, Wonmin Byeon, Xiaolong Wang, Shalini De Mello
CVPR 2023 Highlight. (\*equal contribution)
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.

## Visual Results
<div align="center"> <img src="figs/github_vis_coco_0.gif" width="32%"> <img src="figs/github_vis_ade_0.gif" width="32%"> <img src="figs/github_vis_ego4d_0.gif" width="32%"> </div>
<div align="center"> <img src="figs/github_vis_coco_1.gif" width="32%"> <img src="figs/github_vis_ade_1.gif" width="32%"> <img src="figs/github_vis_ego4d_1.gif" width="32%"> </div>

## Links
- Jiarui Xu's Project Page (with additional visual results)
- HuggingFace 🤗 Demo
- arXiv Page
## Citation
If you find our work useful in your research, please cite:
```bibtex
@article{xu2023odise,
  title={{Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models}},
  author={Xu, Jiarui and Liu, Sifei and Vahdat, Arash and Byeon, Wonmin and Wang, Xiaolong and De Mello, Shalini},
  journal={arXiv preprint arXiv:2303.04803},
  year={2023}
}
```
## Environment Setup
Install dependencies by running:

```shell
conda create -n odise python=3.9
conda activate odise
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
git clone git@github.com:NVlabs/ODISE.git
cd ODISE
pip install -e .
```
(Optional) Install xformers for a more efficient transformer implementation. You can either install the pre-built version:

```shell
pip install xformers==0.0.16
```

or build from the latest source:

```shell
# (Optional) Makes the build much faster
pip install ninja
# Set TORCH_CUDA_ARCH_LIST if running and building on different GPU types
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
# (this can take dozens of minutes)
```
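After the steps above, a quick sanity check can confirm that the environment sees the expected packages. This is a hypothetical helper, not part of the repository; the package names are taken from the install commands above (`odise` being what `pip install -e .` should provide, `xformers` being optional):

```python
import importlib.util

# Report which of the expected packages are importable in the active
# environment; "missing" for xformers is fine if you skipped that step.
for pkg in ["torch", "torchvision", "xformers", "odise"]:
    status = "found" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
```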
## Model Zoo
We provide two pre-trained models for ODISE trained with label or caption
supervision on COCO's entire training set.
ODISE's pre-trained models are subject to the Creative Commons — Attribution-NonCommercial-ShareAlike 4.0 International — CC BY-NC-SA 4.0 License terms.
Each model contains 28.1M trainable parameters.
The download links for these models are provided in the table below.
When you run the `demo/demo.py` or inference script for the very first time, it will also automatically download ODISE's pre-trained model to your local folder `$HOME/.torch/iopath_cache/NVlabs/ODISE/releases/download/v1.0.0/`.
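The cache location above suggests that downloaded release assets are mirrored under `$HOME/.torch/iopath_cache/` following the URL's path. A minimal sketch of that mapping, assuming this layout holds (the filename `model.pth` below is a hypothetical placeholder, not a real release asset name):

```python
from pathlib import Path
from urllib.parse import urlparse

def iopath_cache_path(url: str, cache_root: str = "~/.torch/iopath_cache") -> Path:
    # Mirror the URL's path components under the cache root, matching the
    # $HOME/.torch/iopath_cache/NVlabs/ODISE/... layout described above.
    return Path(cache_root).expanduser() / urlparse(url).path.lstrip("/")

# Hypothetical asset name, for illustration only.
url = "https://github.com/NVlabs/ODISE/releases/download/v1.0.0/model.pth"
print(iopath_cache_path(url))
```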
## Get Started
See Preparing Datasets for ODISE.
See Getting Started with ODISE for detailed instructions on training and inference with ODISE.
## Demo
- Integrated into Huggingface Spaces 🤗 using Gradio. Try out the web demo.

  **Important Note:** When you run the `demo/demo.py` script for the very first time, besides ODISE's pre-trained models, it will also automatically download the pre-trained models for Stable Diffusion v1.3 and CLIP, from their original sources, to your local directories `$HOME/.torch/` and `$HOME/.cache/clip`, respectively. The pre-trained models for Stable Diffusion and CLIP are subject to their original license terms from Stable Diffusion and CLIP, respectively.

- To run ODISE's demo from the command line:

  ```shell
  python demo/demo.py --input demo/examples/coco.jpg --output demo/coco_pred.jpg --vocab "black pickup truck, pickup truck; blue sky, sky"
  ```

  The output is saved in `demo/coco_pred.jpg`. For more detailed options for `demo/demo.py`, see Getting Started with ODISE.

- To run the Gradio demo locally:

  ```shell
  python demo/app.py
  ```
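In the `--vocab` string above, `;` appears to separate classes and `,` to separate synonyms for the same class. A minimal sketch of parsing that format (an illustration of the string's structure, not ODISE's actual parser):

```python
def parse_vocab(vocab: str) -> list[list[str]]:
    """Split a --vocab string: ';' separates classes, ',' separates synonyms."""
    return [
        [name.strip() for name in cls.split(",") if name.strip()]
        for cls in vocab.split(";") if cls.strip()
    ]

print(parse_vocab("black pickup truck, pickup truck; blue sky, sky"))
# → [['black pickup truck', 'pickup truck'], ['blue sky', 'sky']]
```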
## Acknowledgement
Code is largely based on Detectron2, Stable Diffusion, Mask2Former, OpenCLIP and GLIDE.
Thank you all for these great open-source projects!