# YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024]
## Latest Updates: YOLOE (Real-Time Seeing Anything)

Please check out our new release, YOLOE.
- YOLOE code: https://github.com/THU-MIG/yoloe
- YOLOE paper: https://arxiv.org/abs/2503.07465
YOLOE ("ye") is a highly efficient, unified, and open object detection and segmentation model for real-time "seeing anything", like the human eye. It supports different prompt mechanisms (texts, visual inputs, and a prompt-free paradigm) with zero inference and transferring overhead compared with closed-set YOLOs.
<p align="center"> <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/visualization.svg" width=96%> <br> </p> <details> <summary> <font size="+1">Abstract</font> </summary> Object detection and segmentation are widely employed in computer vision applications, yet conventional models like YOLO series, while efficient and accurate, are limited by predefined categories, hindering adaptability in open scenarios. Recent open-set methods leverage text prompts, visual cues, or prompt-free paradigm to overcome this, but often compromise between performance and efficiency due to high computational demands or deployment complexity. In this work, we introduce YOLOE, which integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model, achieving real-time seeing anything. For text prompts, we propose Re-parameterizable Region-Text Alignment (RepRTA) strategy. It refines pretrained textual embeddings via a re-parameterizable lightweight auxiliary network and enhances visual-textual alignment with zero inference and transferring overhead. For visual prompts, we present Semantic-Activated Visual Prompt Encoder (SAVPE). It employs decoupled semantic and activation branches to bring improved visual embedding and accuracy with minimal complexity. For prompt-free scenario, we introduce Lazy Region-Prompt Contrast (LRPC) strategy. It utilizes a built-in large vocabulary and specialized embedding to identify all objects, avoiding costly language model dependency. Extensive experiments show YOLOE's exceptional zero-shot performance and transferability with high inference efficiency and low training cost. Notably, on LVIS, with $3\times$ less training cost and $1.4\times$ inference speedup, YOLOE-v8-S surpasses YOLO-Worldv2-S by 3.5 AP. When transferring to COCO, YOLOE-v8-L achieves 0.6 $AP^b$ and 0.4 $AP^m$ gains over closed-set YOLOv8-L with nearly $4\times$ less training time. 
</details> <p></p> <p align="center"> <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/pipeline.svg" width=96%> <br> </p>
Official PyTorch implementation of YOLOv10. NeurIPS 2024.
<p align="center"> <img src="figures/latency.svg" width=48%> <img src="figures/params.svg" width=48%> <br> Comparisons with others in terms of latency-accuracy (left) and size-accuracy (right) trade-offs. </p>

YOLOv10: Real-Time End-to-End Object Detection.
Ao Wang, Hui Chen, Lihao Liu, Kai Chen, Zijia Lin, Jungong Han, and Guiguang Ding
<a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb#scrollTo=SaKTSzSWnG7s"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## Notes

- 2024/05/31: Please use the exported format for benchmarking. In a non-exported format, e.g., PyTorch, the speed of YOLOv10 is biased because the unnecessary `cv2` and `cv3` operations in the `v10Detect` head are executed during inference.
- 2024/05/30: We provide some clarifications and suggestions for detecting smaller objects or objects in the distance with YOLOv10. Thanks to SkalskiP!
- 2024/05/27: We have updated the checkpoints with class names, for ease of use.
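The export note above can be followed with something like the sketch below. This is a hedged example assuming the ultralytics-style `yolo` CLI bundled with this repository and a locally available `yolov10n.pt` checkpoint; the exact flag names and weight filenames may differ in your installation.

```shell
# Hypothetical sketch: export YOLOv10 to ONNX before benchmarking, so the
# unnecessary cv2/cv3 branches of v10Detect are not executed at inference time.
# Assumes the ultralytics-based CLI shipped with this repo; flags may differ.
yolo export model=yolov10n.pt format=onnx opset=13 simplify

# Then benchmark or predict against the exported model rather than the .pt file:
yolo predict model=yolov10n.onnx source=path/to/images
```

Benchmarking against the exported artifact (ONNX here, but TensorRT or similar also applies) gives latency numbers closer to the ones reported in the paper.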
## Updates 🔥
- 2024/06/01: Thanks to ErlanggaYudiPradana for the integration with C++ | OpenVINO | OpenCV
- 2024/06/01: Thanks to NielsRogge and AK for hosting the models on the HuggingFace Hub!
- 2024/05/31: Build yolov10-jetson docker image by youjiang!
- 2024/05/31: Thanks to mohamedsamirx for the integration with BoTSORT, DeepOCSORT, OCSORT, HybridSORT, ByteTrack, StrongSORT using BoxMOT library!
- 2024/05/31: Thanks to kaylorchen for the integration with rk3588!
- 2024/05/30: Thanks to eaidova for the integration with OpenVINO™!
- 2024/05/29: Add the gradio demo for running the models locally. Thanks to AK!
- 2024/05/27: Thanks to sujanshresstha for the integration with DeepSORT!
- 2024/05/26: Thanks to CVHub520 for the integration into X-AnyLabeling!
- 2024/05/26: Thanks to DanielSarmiento04 for the integration with C++ | ONNX | OpenCV!
- 2024/05/25: Add Transformers.js demo and ONNX weights (yolov10n/s/m/[b](https://huggingface.co/onnx-co
