GroundingDINO

[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"

Install / Use

/learn @IDEA-Research/GroundingDINO

README

<div align="center"> <img src="./.asset/grounding_dino_logo.png" width="30%"> </div>

:sauropod: Grounding DINO

IDEA-CVR, IDEA-Research

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang<sup>:email:</sup>.

[Paper] [Demo] [BibTex]

PyTorch implementation and pretrained models for Grounding DINO. For details, see the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection.

:sun_with_face: Helpful Tutorial


:sparkles: Highlight Projects


:bulb: Highlight

  • Open-Set Detection. Detect everything with language!
  • High Performance. COCO zero-shot 52.5 AP (trained without any COCO data!). COCO fine-tuned 63.0 AP.
  • Flexible. Collaborates with Stable Diffusion for image editing.

:fire: News

  • 2023/07/18: We release Semantic-SAM, a universal image segmentation model that can segment and recognize anything at any desired granularity. Code and checkpoint are available!
  • 2023/06/17: We provide an example to evaluate Grounding DINO's COCO zero-shot performance.
  • 2023/04/15: Refer to CV in the Wild Readings if you are interested in open-set recognition!
  • 2023/04/08: We release demos combining Grounding DINO with GLIGEN for more controllable image editing.
  • 2023/04/08: We release demos combining Grounding DINO with Stable Diffusion for image editing.
  • 2023/04/06: We build a new demo by marrying Grounding DINO with Segment Anything, named Grounded-Segment-Anything, which aims to support segmentation in Grounding DINO.
  • 2023/03/28: A YouTube video about Grounding DINO and basic object-detection prompt engineering. [SkalskiP]
  • 2023/03/28: Add a demo on Hugging Face Space!
  • 2023/03/27: Support CPU-only mode. The model can now run on machines without GPUs.
  • 2023/03/25: A demo for Grounding DINO is available on Colab. [SkalskiP]
  • 2023/03/22: Code is available now!
<details open> <summary><font size="4"> Description </font></summary> See the <a href="https://arxiv.org/abs/2303.05499">paper</a> for an introduction. <img src=".asset/hero_figure.png" alt="ODinW" width="100%"> Marrying <a href="https://github.com/IDEA-Research/GroundingDINO">Grounding DINO</a> and <a href="https://github.com/gligen/GLIGEN">GLIGEN</a> <img src="https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/GD_GLIGEN.png" alt="gd_gligen" width="100%"> </details>

:star: Explanations/Tips for Grounding DINO Inputs and Outputs

  • Grounding DINO accepts an (image, text) pair as input.
  • It outputs 900 (by default) object boxes. Each box has similarity scores across all input words (as shown in the figures below).
  • By default, we keep the boxes whose highest word similarity exceeds a box_threshold.
  • We extract the words whose similarities are higher than the text_threshold as predicted labels.
  • If you want to obtain objects for a specific phrase, like the dogs in the sentence two dogs with a stick., you can select the boxes with the highest text similarity to dogs as the final outputs.
  • Note that each word can be split into more than one token by different tokenizers.
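The selection steps above can be sketched with a synthetic similarity matrix; the shapes, scores, and threshold values here are illustrative only, not the library's actual API:

```python
import numpy as np

# Synthetic (box x word) similarity matrix: 5 candidate boxes, 4 text tokens.
# The real model outputs 900 boxes by default.
logits = np.array([
    [0.10, 0.05, 0.02, 0.01],
    [0.45, 0.30, 0.05, 0.02],
    [0.05, 0.60, 0.10, 0.01],
    [0.02, 0.03, 0.04, 0.01],
    [0.20, 0.55, 0.35, 0.02],
])
words = ["two", "dogs", "with", "stick"]

box_threshold = 0.35   # keep boxes whose best word similarity exceeds this
text_threshold = 0.25  # keep words above this similarity as predicted labels

# Step 1: filter boxes by their highest word similarity.
keep = logits.max(axis=1) > box_threshold
kept_logits = logits[keep]

# Step 2: for each kept box, join the words above text_threshold into a label.
labels = [
    " ".join(w for w, s in zip(words, row) if s > text_threshold)
    for row in kept_logits
]
print(labels)  # prints ['two dogs', 'dogs', 'dogs with']

# To target a specific phrase (e.g. "dogs"), pick the box with the highest
# similarity to that word instead of thresholding.
best_box = int(np.argmax(logits[:, words.index("dogs")]))
print(best_box)  # prints 2
```

With the real model the same logic applies to the 900 predicted boxes, using the `box_threshold` and `text_threshold` parameters exposed by the inference utilities.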