Efficient Teacher

A Supervised and Semi-Supervised Object Detection Library for the YOLO Series

English | 简体中文

New [2023/03/14]: We release a YOLOv5l pre-trained on Objects365, together with a transfer learning recipe.

Efficient Teacher is created by Alibaba and is used for training both supervised and semi-supervised object detection (SSOD) algorithms. For more details, please refer to our paper.

Based on the YOLOv5 open source project, Efficient Teacher uses YACS and the latest network design to restructure key modules, so that it can achieve supervised and semi-supervised training for YOLOv5, YOLOX, YOLOv6, YOLOv7, and YOLOv8 using a single algorithm library.

Why Efficient Teacher

<!-- <img src="assets/efficient_teacher.png" width='600' height='300' align=center> -->

If you encounter difficulties such as domain gaps between your training data and the actual deployment scenario, the high cost of collecting data back from business scenarios, or the high cost of labeling specific categories:

  • Efficient Teacher introduces semi-supervised object detection into practical applications, enabling users to obtain a strong generalization capability with only a small amount of labeled data and large amount of unlabeled data.
  • Efficient Teacher provides category and custom uniform sampling, which can quickly improve the network performance in actual business scenarios.
<!-- If you are a heavy user of YOLOv5: -->

If you are already familiar with the YOLOv5 open-source framework and maintain your own modified algorithm library (which is quite common for applied algorithm engineers):

  • You can use the convert_pt_to_efficient.py script to convert YOLOv5 weights to Efficient Teacher weights
  • You can reuse existing datasets and annotations prepared for YOLOv5 without any format adjustment
  • With a simple modification of the YAML configuration file, you can switch the training network from YOLOv5 to YOLOX/YOLOv6/YOLOv7/YOLOv8 while keeping the same evaluation metrics as YOLOv5, making it easier to tell whether a new network structure is really effective for your task.
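As a rough sketch of that workflow, switching the detector family amounts to a one-line YAML edit. The key names below (`Model` / `Backbone` / `name`) are hypothetical stand-ins — consult the YAML files shipped in the repository's configs/ directory for the actual YACS schema:

```shell
# Write a toy config; the key names are hypothetical stand-ins for the real
# YACS schema used by Efficient Teacher.
cat > /tmp/example_cfg.yaml <<'EOF'
Model:
  Backbone:
    name: YOLOv5
EOF

# Switch the detector family from YOLOv5 to YOLOX with a one-line edit.
sed -i 's/name: YOLOv5/name: YOLOX/' /tmp/example_cfg.yaml
grep 'name:' /tmp/example_cfg.yaml
```

Because the rest of the pipeline (dataset format, evaluation) stays fixed, this kind of config-only switch is what makes apples-to-apples comparisons between the YOLO variants possible.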

Below are the results of YOLOv5l trained using Efficient Teacher. We did not make any modifications to the YOLOv5l structure; instead, we designed training modules that help the network generate pseudo-labels for unlabeled data and learn effective information from those pseudo-labels. Efficient Teacher improves the mAP<sup>val</sup> of a standard YOLOv5l from 49.00 to 50.45 using unlabeled data on the COCO dataset.

MS-COCO SSOD additional

|Model |Dataset |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |Speed<br><sup>V100<br>PyTorch<br>b32<br>FP32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (G) |
|--- |--- |--- |--- |--- |--- |--- |
|YOLOv5s<br>Supervised |train2017 |640 |37.2 |1.6 |7.2 |16.5 |
|YOLOv5s<br>Efficient Teacher |train2017 + unlabeled2017 |640 |38.1 (+0.9) |1.6 |7.2 |16.5 |
|YOLOv5m<br>Supervised |train2017 |640 |45.4 |4.8 |21.17 |48.97 |
|YOLOv5m<br>Efficient Teacher |train2017 + unlabeled2017 |640 |46.4 (+1.0) |4.8 |21.17 |48.97 |
|YOLOv5l<br>Supervised |train2017 |640 |49.00 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Efficient Teacher |train2017 + unlabeled2017 |640 |50.45 (+1.45) |6.2 |46.56 |109.59 |

MS-COCO SSOD standard

|Model |Dataset |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |Speed<br><sup>V100<br>PyTorch<br>b32<br>FP32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (G) |
|--- |--- |--- |--- |--- |--- |--- |
|YOLOv5l<br>Supervised |1% labeled |640 |9.91 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Efficient Teacher |1% labeled |640 |23.8 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Supervised |2% labeled |640 |14.01 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Efficient Teacher |2% labeled |640 |28.7 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Supervised |5% labeled |640 |23.75 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Efficient Teacher |5% labeled |640 |34.1 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Supervised |10% labeled |640 |28.45 |6.2 |46.56 |109.59 |
|YOLOv5l<br>Efficient Teacher |10% labeled |640 |37.9 |6.2 |46.56 |109.59 |

We also provide various solutions implemented with supervised training. Below are the performance results of the detectors trained using this library.

MS-COCO

|Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Precision |Recall |Speed<br><sup>V100<br>PyTorch<br>b32<br>FP32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (G) |
|--- |--- |--- |--- |--- |--- |--- |--- |--- |
|Nanodetm |320 |20.2 |33.4 |47.8 |33.7 |0.6 |0.9593 |0.730 |
|YOLOv5n |320 |20.5 |34.6 |49.8 |33.3 |0.4 |1.87 |1.12 |
|YOLOXn |320 |24.2 |38.4 |55.7 |36.5 |0.5 |2.02 |1.39 |
|YOLOv6n |640 |34.4 |49.3 |61.1 |45.8 |0.9 |4.34 |11.26 |
|YOLOv5s |640 |37.2 |56.8 |68.1 |50.9 |1.6 |7.2 |16.5 |
|YOLOXs |640 |39.7 |59.6 |65.2 |56.0 |1.7 |8.04 |21.42 |
|YOLOv6t |640 |40.3 |56.5 |68.9 |50.5 |1.7 |9.72 |25.11 |
|YOLOv6s |640 |42.1 |58.6 |69.1 |52.5 |1.9 |17.22 |44.25 |
|YOLOv7s |640 |43.1 |60.1 |69.6 |55.3 |2.3 |8.66 |23.69 |
|YOLOv7s SimOTA |640 |44.5 |62.5 |71.8 |56.5 |2.4 |9.47 |28.48 |
|YOLOv5m |640 |45.4 |64.1 |72.4 |57.6 |4.8 |21.17 |48.97 |
|YOLOv5l |640 |49.0 |66.1 |74.2 |61 |6.2 |46.56 |109.59 |
|YOLOv5x |640 |50.7 |68.8 |74.2 |62.6 |10.7 |86.71 |205.67 |
|YOLOv7 |640 |51.5 |69.1 |72.6 |63.5 |6.8 |37.62 |106.47 |

Reproduce the COCO SSOD experimental results

  • First, you need to download the images and labels of the COCO dataset and process them into the default format of YOLOv5 (which should be familiar to you).

    bash data/get_coco.sh
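For reference, YOLOv5's label format stores one `class x_center y_center width height` line per object, with coordinates normalized to [0, 1]. A tiny illustration of the conversion from a COCO-style pixel box (the box coordinates and image size below are made up):

```shell
# Convert a COCO-style pixel box (x=100 y=200 w=50 h=80) on a 640x480 image
# into YOLOv5's normalized "class cx cy w h" line (class id 0 here).
echo "100 200 50 80" | awk '{
  img_w = 640; img_h = 480
  printf "0 %.6f %.6f %.6f %.6f\n",
         ($1 + $3/2)/img_w, ($2 + $4/2)/img_h, $3/img_w, $4/img_h
}'
```

The get_coco.sh script performs this conversion for the whole dataset; the snippet only shows the arithmetic behind one label line.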
    
  • Organize the downloaded images and annotation files in the following layout.

    efficientteacher
      ├── data
      └── datasets
          └── coco  ← downloads here (20.1 GB)
               └── images
               └── labels
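A quick sanity check, run from the repository root, that the layout above is in place before launching training:

```shell
# Verify the expected dataset directories exist under the repo root.
for d in datasets/coco/images datasets/coco/labels; do
  if [ -d "$d" ]; then echo "ok: $d"; else echo "missing: $d"; fi
done
```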
    
  • Download the train/val dataset lists:

    bash data/get_label.sh
    
  • Replace "local_path" with the local path of your EfficientTeacher folder:

    CUR_PATH=$(pwd)
    sed -i "s#local_path#$CUR_PATH#" data/coco/train2017*.txt
    sed -i "s#local_path#$CUR_PATH#" data/coco/val2017.txt
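To illustrate what the substitution does, here is its effect on a single entry (the path below is a made-up example, not an actual line from the list files):

```shell
# Each list file contains image paths with a "local_path" placeholder; the sed
# command rewrites that placeholder to your absolute repository path. Using #
# as the s-command delimiter avoids escaping the slashes in the path.
CUR_PATH=/home/user/efficientteacher   # stand-in for $(pwd)
echo 'local_path/datasets/coco/images/train2017/000000000009.jpg' \
  | sed "s#local_path#$CUR_PATH#"
# -> /home/user/efficientteacher/datasets/coco/images/train2017/000000000009.jpg
```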
    
  • If you don't have your own GPU container environment, we recommend using the official ModelScope container; we have verified all training and inference code in this environment.

    docker run registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.3.0-py37-torch1.11.0-tf1.15.5-1.3.0
    
  • COCO 10% labeled SSOD training

    export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
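The environment variable above restricts which GPUs are visible to the training process; the number of comma-separated entries is what a multi-GPU launcher would use as its worker count. A quick way to count them:

```shell
# Count the GPUs exposed through CUDA_VISIBLE_DEVICES (8 in the example above).
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
NUM_GPUS=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | wc -l)
echo "visible GPUs: $NUM_GPUS"
```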
    