
# AYOLOv2

License: GPL v3


The main goal of this repository is to rewrite the object detection pipeline with a cleaner code structure, improving portability and making it easier to apply new experimental methods. The pipeline is based on Ultralytics YOLOv5.

## What's inside of this repository

  1. YOLOv5-based portable model (model built with kindle)
  2. Model conversion (TorchScript, ONNX, TensorRT) support
  3. Tensor-decomposed model with pruning optimization
  4. Stochastic Weight Averaging (SWA) support
  5. Auto-search for NMS parameter optimization
  6. W&B support with model save and load functionality
  7. Representation learning (experimental)
  8. Distillation via the soft-teacher method (experimental)
  9. C++ inference (WIP)
  10. AutoML - searching an efficient architecture for a given dataset (incoming!)

## Table of Contents

- How to start
- Pretrained models
- Advanced usages

## How to start

<details> <summary>Install</summary>

Using a conda environment:

```bash
git clone https://github.com/j-marple-dev/AYolov2.git
cd AYolov2
./run_check.sh init
# Equivalent to
# conda env create -f environment.yml
# pre-commit install --hook-type pre-commit --hook-type pre-push
```

Using Docker:

Building a Docker image:

```bash
./run_docker.sh build
# You can add build options
# ./run_docker.sh build --no-cache
```

Running the container (this mounts the current repository directory from the local disk into the container):

```bash
./run_docker.sh run
# You can add run options
# ./run_docker.sh run -v $DATASET_PATH:/home/user/dataset
```

Executing a shell in the last running container:

```bash
./run_docker.sh exec
```
</details> <details open> <summary>Train a model</summary>
  • Example

    ```bash
    python3 train.py --model $MODEL_CONFIG_PATH --data $DATA_CONFIG_PATH --cfg $TRAIN_CONFIG_PATH
    # i.e.
    # python3 train.py --model res/configs/model/yolov5s.yaml --data res/configs/data/coco.yaml --cfg res/configs/cfg/train_config.yaml
    # Log and upload trained weights to W&B
    # python3 train.py --model res/configs/model/yolov5s.yaml --wlog --wlog_name yolov5s
    ```
    
    <details> <summary>Prepare dataset</summary>
    • Dataset config file

    ```yaml
    train_path: "DATASET_ROOT/images/train"
    val_path: "DATASET_ROOT/images/val"

    # Classes
    nc: 10  # number of classes
    dataset: "DATASET_NAME"
    names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light']  # class names
    ```
    
    • Dataset directory structure
      • Either the labels or the segments directory must exist.
      • The training label type (labels or segments) is specified in the training config.
      • Each image must have a matching label file in labels or segments with the same filename and a .txt extension.

    ```
    DATASET_ROOT
    │
    ├── images
    │   ├── train
    │   └── val
    ├── labels
    │   ├── train
    │   └── val
    ├── segments
    │   ├── train
    │   └── val
    ```
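
    Since the pipeline is based on YOLOv5, the label files are assumed to follow the standard YOLOv5 text format: one object per line as "class x_center y_center width height", with coordinates normalized to [0, 1]. Below is a minimal sketch for checking that every training image has a matching label file; the paths and image extension are illustrative:

    ```python
    # Sketch: verify image/label filename pairing in a YOLOv5-style dataset layout.
    from pathlib import Path

    dataset_root = Path("DATASET_ROOT")  # replace with your dataset root
    images = sorted((dataset_root / "images" / "train").glob("*.jpg"))
    labels_dir = dataset_root / "labels" / "train"

    missing = [img.name for img in images if not (labels_dir / f"{img.stem}.txt").exists()]
    print(f"{len(images)} images, {len(missing)} without labels")
    ```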
    
    </details> <details> <summary>Training config</summary>
    • Default training configurations are defined in train_config.yaml.
    • You may want to change batch_size, epochs, device, workers, and label_type to match your model, dataset, and training hardware (see the snippet below).
    • Be cautious when changing other parameters, as they may affect training results.
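
    For orientation only, here is how those keys might look in train_config.yaml; the values below are illustrative placeholders, not the repository defaults:

    ```yaml
    # Illustrative values only -- see res/configs/cfg/train_config.yaml for the real defaults.
    batch_size: 16
    epochs: 300
    device: 0           # GPU index
    workers: 8          # dataloader workers
    label_type: labels  # or "segments"
    ```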
    </details> <details> <summary>Model config</summary>
    • The model is defined by a YAML file with kindle (see the sketch below).
    • Please refer to https://github.com/JeiKeiLim/kindle
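
    A model config can be instantiated directly with kindle; a minimal sketch, assuming kindle's documented Model API:

    ```python
    # Sketch: build a model from a kindle YAML config (API as documented in the kindle README).
    from kindle import Model

    model = Model("res/configs/model/yolov5s.yaml", verbose=True)  # prints a layer summary
    ```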
    </details> <details> <summary>Multi-GPU training</summary>
    • Please use the torch.distributed.run module for multi-GPU training.

    ```bash
    python3 -m torch.distributed.run --nproc_per_node $N_GPU train.py --model $MODEL_CONFIG_PATH --data $DATA_CONFIG_PATH --cfg $TRAIN_CONFIG_PATH
    ```

    - N_GPU: number of GPUs to use

    </details>
</details> <details open> <summary>Run a model validation</summary>
  • Validate from local weights

```bash
python3 val.py --weights $WEIGHT_PATH --data-cfg $DATA_CONFIG_PATH
```

  • You can also pass a W&B path as the weights argument.

```bash
python3 val.py --weights j-marple/AYolov2/179awdd1 --data-cfg $DATA_CONFIG_PATH
```

  • TTA (Test-Time Augmentation)

```bash
python3 val.py --weights $WEIGHT_PATH --data-cfg $DATA_CONFIG_PATH --tta --tta-cfg $TTA_CFG_PATH
```

  • Validate with pycocotools (only for COCO val2017 images). Future work: val.py and val2.py should be merged. A sketch of what pycocotools evaluation involves follows the command.

```bash
python3 val2.py --weights $WEIGHT_PATH --data $VAL_IMAGE_PATH --json-path $JSON_FILE_PATH
```
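
For context, evaluating a detection results file with pycocotools boils down to a few standard calls (a minimal sketch using the standard pycocotools API, independent of val2.py internals; file paths are placeholders):

```python
# Sketch: COCO-style evaluation with pycocotools (standard API; paths are placeholders).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")       # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # detections in COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints mAP at IoU 0.5:0.95, 0.5, etc.
```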
</details>

## Pretrained models

| Name | W&B URL | img_size | mAP<sup>val<br>0.5:0.95</sup> | mAP<sup>val<br>0.5</sup> | params |
|------|---------|----------|-------------------------------|--------------------------|--------|
| YOLOv5s | <sub>j-marple/AYolov2/33cxs5tn</sub> | 640 | 38.2 | 57.5 | 7,235,389 |
| YOLOv5m | <sub>j-marple/AYolov2/2ktlek75</sub> | 640 | 45.0 | 63.9 | 21,190,557 |
| YOLOv5l decomposed | <sub>j-marple/AYolov2/30t7wh1x</sub> | 640 | 46.9 | 65.6 | 26,855,105 |
| YOLOv5l | <sub>j-marple/AYolov2/1beuv3fd</sub> | 640 | 48.0 | 66.6 | 46,563,709 |
| YOLOv5x decomposed | <sub>j-marple/AYolov2/2pcj9mfh</sub> | 640 | 49.2 | 67.6 | 51,512,570 |
| YOLOv5x | <sub>j-marple/AYolov2/1gxaqgk4</sub> | 640 | 49.6 | 68.1 | 86,749,405 |


## Advanced usages

<details> <summary>Export model to TorchScript, ONNX, TensorRT</summary>
  • You can export a trained model to TorchScript, ONNX, or TensorRT.

  • INT8 quantization is currently not supported (coming soon).

  • Usage

```bash
python3 export.py --weights $WEIGHT_PATH --type [torchscript, ts, onnx, tensorrt, trt] --dtype [fp32, fp16, int8]
```

  • The above command generates both the model file and a matching model config file.

    • Example) FP16, batch size 8, image size 640x640, TensorRT
      • model_fp16_8_640_640.trt
      • model_fp16_8_640_640_trt.yaml

      ```yaml
      batch_size: 8
      conf_t: 0.001  # NMS confidence threshold
      dst: exp/  # Model location
      dtype: fp16  # Data type
      gpu_mem: 6  # GPU memory restriction
      img_height: 640
      img_width: 640
      iou_t: 0.65  # NMS IoU threshold
      keep_top_k: 100  # NMS top-k parameter
      model_cfg: res/configs/model/yolov5x.yaml  # Base model config location
      opset: 11  # ONNX opset version
      rect: false  # Rectangular inference mode
      stride_size: 32  # Model stride size
      top_k: 512  # Pre-NMS top-k parameter
      type: trt  # Model type
      verbose: 1  # Verbosity level
      weights: ./exp/yolov5x.pt  # Base model weight file location
      ```

  • Once the model has been exported, you can run val.py with it.

    • ONNX inference is currently not supported.

    ```bash
    python3 val.py --weights model_fp16_8_640_640.trt --data-cfg $DATA_CONFIG_PATH
    ```
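
  The generated config file is plain YAML, so the export settings can be read back before inference (a minimal sketch using PyYAML; the filename matches the example above):

  ```python
  # Sketch: read back the export settings from the generated config (a plain YAML file).
  import yaml

  with open("model_fp16_8_640_640_trt.yaml") as f:
      export_cfg = yaml.safe_load(f)

  print(export_cfg["type"], export_cfg["dtype"], export_cfg["batch_size"])  # trt fp16 8
  ```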
    
</details> <details> <summary>Applying tensor decomposition</summary>
  • A trained model can be compressed via tensor decomposition.

  • A decomposed conv replaces one large convolution with three smaller convolutions (see the parameter-count sketch below).

    • Example)
      • Original conv: 64x128x3x3
      • Decomposed conv: 64x32x1x1 -> 32x16x3x3 -> 16x128x1x1
  • Usage

    ```bash
    python3 decompose_model.py --weights $WEIGHT_PATH --loss-thr $DECOMPOSE_LOSS_THRESHOLD --prune-step $PRUNING_STEP --data-cfg $DATA_CONFIG_PATH
    ```

    ```
    ...
    [  Original] # param: 86,749,405, mAP0.5: 0.678784398716757, Speed(pre-process, inference, NMS): 0
    ```
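
  To make the compression concrete, here is the parameter count for the example above (a PyTorch sketch; the 64x32x1x1 -> 32x16x3x3 -> 16x128x1x1 shapes follow the example, and biases are omitted for simplicity):

  ```python
  # Sketch: parameter count of the decomposition example above (PyTorch, bias omitted).
  import torch.nn as nn

  original = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
  decomposed = nn.Sequential(
      nn.Conv2d(64, 32, kernel_size=1, bias=False),             # project in-channels down
      nn.Conv2d(32, 16, kernel_size=3, padding=1, bias=False),  # low-rank 3x3 core conv
      nn.Conv2d(16, 128, kernel_size=1, bias=False),            # project out-channels up
  )

  count = lambda m: sum(p.numel() for p in m.parameters())
  print(count(original), "->", count(decomposed))  # 73728 -> 8704
  ```

</details>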
    