
YOLOv10

YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024]

Install / Use

/learn @THU-MIG/Yolov10
About this skill

  • Quality Score: 0/100
  • Supported Platforms: Universal

README

Latest Updates -- YOLOE: Real-Time Seeing Anything

Please check out our new release on YOLOE.

  • YOLOE code: https://github.com/THU-MIG/yoloe
  • YOLOE paper: https://arxiv.org/abs/2503.07465
<p align="center"> <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/comparison.svg" width=70%> <br> Comparison of performance, training cost, and inference efficiency between YOLOE (Ours) and YOLO-Worldv2 in terms of open text prompts. </p>

YOLOE ("ye") is a highly efficient, unified, and open object detection and segmentation model for real-time "seeing anything", like the human eye, under different prompt mechanisms such as text prompts, visual inputs, and a prompt-free paradigm, with zero inference and transfer overhead compared with closed-set YOLOs.
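The prompt mechanisms above all reduce to comparing region embeddings against prompt embeddings in a shared space. A minimal NumPy sketch of that open-vocabulary scoring idea (the toy 4-d embeddings, function name, and prompt list are all hypothetical; this is not the YOLOE implementation):

```python
import numpy as np

def classify_regions(region_embs, text_embs, prompt_names):
    # L2-normalize both sides so dot products become cosine similarities.
    r = region_embs / np.linalg.norm(region_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = r @ t.T                      # (num_regions, num_prompts)
    best = sims.argmax(axis=1)          # best-matching prompt per region
    return [prompt_names[i] for i in best], sims.max(axis=1)

# Toy 4-d embeddings: two detected regions, two text prompts.
regions = np.array([[0.9, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.2]])
texts = np.array([[1.0, 0.0, 0.0, 0.0],    # "person"
                  [0.0, 0.0, 1.0, 0.0]])   # "dog"
labels, scores = classify_regions(regions, texts, ["person", "dog"])
print(labels)  # ['person', 'dog']
```

The same scoring works whether the prompt embeddings come from text (RepRTA), from visual cues (SAVPE), or from a built-in vocabulary (LRPC); only how the prompt embeddings are produced differs.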

<p align="center"> <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/visualization.svg" width=96%> <br> </p> <details> <summary> <font size="+1">Abstract</font> </summary> Object detection and segmentation are widely employed in computer vision applications, yet conventional models like the YOLO series, while efficient and accurate, are limited by predefined categories, hindering adaptability in open scenarios. Recent open-set methods leverage text prompts, visual cues, or a prompt-free paradigm to overcome this, but often compromise between performance and efficiency due to high computational demands or deployment complexity. In this work, we introduce YOLOE, which integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model, achieving real-time seeing anything. For text prompts, we propose the Re-parameterizable Region-Text Alignment (RepRTA) strategy. It refines pretrained textual embeddings via a re-parameterizable lightweight auxiliary network and enhances visual-textual alignment with zero inference and transferring overhead. For visual prompts, we present the Semantic-Activated Visual Prompt Encoder (SAVPE). It employs decoupled semantic and activation branches to bring improved visual embedding and accuracy with minimal complexity. For the prompt-free scenario, we introduce the Lazy Region-Prompt Contrast (LRPC) strategy. It utilizes a built-in large vocabulary and specialized embedding to identify all objects, avoiding costly language-model dependency. Extensive experiments show YOLOE's exceptional zero-shot performance and transferability with high inference efficiency and low training cost. Notably, on LVIS, with $3\times$ less training cost and $1.4\times$ inference speedup, YOLOE-v8-S surpasses YOLO-Worldv2-S by 3.5 AP. When transferring to COCO, YOLOE-v8-L achieves 0.6 $AP^b$ and 0.4 $AP^m$ gains over closed-set YOLOv8-L with nearly $4\times$ less training time. 
</details> <p></p> <p align="center"> <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/pipeline.svg" width=96%> <br> </p>

YOLOv10: Real-Time End-to-End Object Detection

Official PyTorch implementation of YOLOv10. NeurIPS 2024.

<p align="center"> <img src="figures/latency.svg" width=48%> <img src="figures/params.svg" width=48%> <br> Comparisons with others in terms of latency-accuracy (left) and size-accuracy (right) trade-offs. </p>

YOLOv10: Real-Time End-to-End Object Detection.
Ao Wang, Hui Chen, Lihao Liu, Kai Chen, Zijia Lin, Jungong Han, and Guiguang Ding
arXiv <a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb#scrollTo=SaKTSzSWnG7s"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> · Hugging Face Spaces · Transformers.js Demo · LearnOpenCV · Openbayes Demo

<details> <summary> <font size="+1">Abstract</font> </summary> Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and more for YOLOs, achieving notable progress. However, the reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts inference latency. Besides, the design of various components in YOLOs lacks comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. This results in suboptimal efficiency, along with considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. Moreover, we introduce a holistic efficiency-accuracy driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of the YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under a similar AP on COCO, while having 2.8$\times$ fewer parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance. 
</details>
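For context on what NMS-free training removes: conventional YOLOs emit many overlapping candidate boxes per object and rely on greedy non-maximum suppression as a post-processing step. A minimal sketch of that step in plain NumPy (box format and threshold are illustrative, not YOLOv10 code):

```python
import numpy as np

def iou(a, b):
    # Boxes are [x1, y1, x2, y2]; returns intersection-over-union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.65):
    # Greedy NMS: keep the highest-scoring box, drop boxes that
    # overlap it above the threshold, repeat on the remainder.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(int(i))
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate box 1 is suppressed
```

With consistent dual assignments, the one-to-one head learns to emit a single prediction per object, so this suppression pass, and the latency it adds, can be dropped at deployment.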

Notes

  • 2024/05/31: Please use the exported format for benchmarking. In a non-exported format, e.g., PyTorch, the measured speed of YOLOv10 is biased because the unnecessary cv2 and cv3 operations in v10Detect are executed during inference.
  • 2024/05/30: We provide some clarifications and suggestions for detecting smaller or more distant objects with YOLOv10. Thanks to SkalskiP!
  • 2024/05/27: We have updated the checkpoints with class names for ease of use.

UPDATES 🔥

Repository Info

  • GitHub Stars: 11.3k
  • Forks: 1.2k
  • Category: Development
  • Updated: 4h ago
  • Languages: Python
  • Security Score: 95/100 (audited on Mar 20, 2026; no findings)