YOLOX
YOLOX is a high-performance anchor-free YOLO that exceeds YOLOv3~v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO deployment supported. Documentation: https://yolox.readthedocs.io/
Introduction
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on arXiv.
This repo is the PyTorch implementation of YOLOX; there is also a MegEngine implementation.
<img src="assets/git_fig.png" width="1000" >

Updates!!
- 【2023/02/28】 We support assignment visualization tool, see doc here.
- 【2022/04/14】 We support jit compile op.
- 【2021/08/19】 We optimize the training process with 2x faster training and ~1% higher performance! See notes for more details.
- 【2021/08/05】 We release MegEngine version YOLOX.
- 【2021/07/28】 We fix a fatal memory leak error.
- 【2021/07/26】 We now support MegEngine deployment.
- 【2021/07/20】 We have released our technical report on arXiv.
Benchmark
Standard Models.
|Model |size |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: | :---: |:---: |:---: | :---: | :----: |
|YOLOX-s |640 |40.5 |40.5 |9.8 |9.0 | 26.8 | github |
|YOLOX-m |640 |46.9 |47.2 |12.3 |25.3 |73.8| github |
|YOLOX-l |640 |49.7 |50.1 |14.5 |54.2| 155.6 | github |
|YOLOX-x |640 |51.1 |51.5 | 17.3 |99.1 |281.9 | github |
|YOLOX-Darknet53 |640 | 47.7 | 48.0 | 11.1 |63.7 | 185.3 | github |
<details>
<summary>Legacy models</summary>

|Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: | :----: |
|YOLOX-s |640 |39.6 |9.8 |9.0 | 26.8 | onedrive/github |
|YOLOX-m |640 |46.4 |12.3 |25.3 |73.8| onedrive/github |
|YOLOX-l |640 |50.0 |14.5 |54.2| 155.6 | onedrive/github |
|YOLOX-x |640 |51.2 | 17.3 |99.1 |281.9 | onedrive/github |
|YOLOX-Darknet53 |640 | 47.4 | 11.1 |63.7 | 185.3 | onedrive/github |
</details>

Light Models.
|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: |
|YOLOX-Nano |416 |25.8 | 0.91 |1.08 | github |
|YOLOX-Tiny |416 |32.8 | 5.06 |6.45 | github |
<details>
<summary>Legacy models</summary>

|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: |
|YOLOX-Nano |416 |25.3 | 0.91 |1.08 | github |
|YOLOX-Tiny |416 |32.8 | 5.06 |6.45 | github |
</details>

Quick Start
<details>
<summary>Installation</summary>

Step1. Install YOLOX from source.
```shell
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -v -e .  # or python3 setup.py develop
```
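For a quick sanity check that the install worked, you can import the package from Python. This is a minimal sketch, assuming the installed package exposes a __version__ attribute (which may differ across releases):

```python
# Minimal install check: import the package and print its version.
# Assumes yolox exposes __version__; adjust if your installed release differs.
import yolox

print(yolox.__version__)
```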
</details>
<details>
<summary>Demo</summary>
Step1. Download a pretrained model from the benchmark table.
Step2. Use either -n or -f to specify your detector's config. For example:
```shell
python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
or
```shell
python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
Demo for video:
```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
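The demo can also be driven programmatically. The following is a rough sketch modeled on tools/demo.py; the helper names and signatures used here (get_exp, ValTransform, postprocess) are assumptions based on the current repo layout and may change between versions, so verify them against your checkout:

```python
# Sketch of single-image inference, loosely following tools/demo.py.
# Helper names (get_exp, ValTransform, postprocess) are assumptions based on
# the current repo layout; verify them against your YOLOX version.
import cv2
import torch

from yolox.data.data_augment import ValTransform
from yolox.exp import get_exp
from yolox.utils import postprocess

exp = get_exp(None, "yolox-s")                 # same selection as -n yolox-s
model = exp.get_model().eval()
ckpt = torch.load("/path/to/your/yolox_s.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])

img = cv2.imread("assets/dog.jpg")
preproc = ValTransform(legacy=False)
tensor, _ = preproc(img, None, exp.test_size)  # resize + pad to exp.test_size
tensor = torch.from_numpy(tensor).unsqueeze(0).float()

with torch.no_grad():
    outputs = model(tensor)
# After postprocess, each row is x1, y1, x2, y2, obj_conf, cls_conf, cls_id
outputs = postprocess(outputs, exp.num_classes, conf_thre=0.25, nms_thre=0.45)
print(outputs[0])
```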
</details>
<details>
<summary>Reproduce our results on COCO</summary>
Step1. Prepare COCO dataset
```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```
Step2. Reproduce our results on COCO by specifying -n:
```shell
python -m yolox.tools.train -n yolox-s -d 8 -b 64 --fp16 -o [--cache]
                               yolox-m
                               yolox-l
                               yolox-x
```
- -d: number of GPU devices
- -b: total batch size; the recommended number for -b is num-gpu * 8
- --fp16: mixed precision training
- --cache: cache images into RAM to accelerate training, which requires a large amount of system RAM.
When using -f, the above commands are equivalent to:
```shell
python -m yolox.tools.train -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o [--cache]
                               exps/default/yolox_m.py
                               exps/default/yolox_l.py
                               exps/default/yolox_x.py
```
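For reference, an experiment file passed to -f is a small Python module that defines an Exp class. The sketch below mirrors the structure of exps/default/yolox_s.py; the attribute names and the depth/width values are assumptions to check against your checkout:

```python
# Sketch of an experiment file usable with -f, modeled on exps/default/yolox_s.py.
# Attribute names and values are assumptions based on the current repo layout.
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        # Depth/width multipliers select the YOLOX-s model size.
        self.depth = 0.33
        self.width = 0.50
        # Name the experiment after this file (e.g. "yolox_s").
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```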
Multi Machine Training
We also support multi-node training. Just add the following args:
- --num_machines: total number of training nodes
- --machine_rank: the rank of each node
Suppose you want to train YOLOX on 2 machines, your master machine's IP is 123.123.123.123, and you use port 12312 over TCP.
On the master machine, run
```shell
python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 0
```
On the second machine, run
```shell
python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 1
```
Logging to Weights & Biases
To log metrics, predictions, and model checkpoints to W&B, use the command-line argument --logger wandb and the prefix "wandb-" to specify arguments for initializing the wandb run.
```shell
python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o [--cache] --logger wandb wandb-project <project name>
                         yolox-m
                         yolox-l
                         yolox-x
```
An example wandb dashboard is available here.
Others
See more information with the following command:
```shell
python -m yolox.tools.train --help
```
</details>
<details>
<summary>Evaluation</summary>
We support batch testing for fast evaluation:
```shell
python -m yolox.tools.eval -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
                              yolox-m
                              yolox-l
                              yolox-x
```
- --fuse: fuse conv and bn layers for faster inference (see the sketch after this list)
- -d: number of GPUs used for evaluation. By default, all available GPUs are used.
- -b: total batch size across all GPUs
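Conceptually, fusing a convolution with its following BatchNorm folds the BN statistics into the conv weights so inference can skip the BN op entirely. Below is a minimal PyTorch sketch of that folding; it illustrates the idea only and is not the repo's own fuse implementation:

```python
# Illustration of conv+bn fusion for inference; not the repo's own implementation.
import torch
import torch.nn as nn


def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution (inference only)."""
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding,
        dilation=conv.dilation, groups=conv.groups, bias=True,
    )
    # scale = gamma / sqrt(running_var + eps)
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    # Fused bias: scale * (b - running_mean) + beta
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```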
To reproduce the speed test, we use the following command:
```shell
python -m yolox.tools.eval -n yolox-s -c yolox_s.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse
                              yolox-m
                              yolox-l
                              yolox-x
```
</details>
<details>
<summary>Tutorials</summary>
- Training on custom data
- Caching for custom data
- Manipulating training image size
- Assignment visualization
- Freezing model
