
<div id="lite.ai.toolkit-Introduction"></div> <!-- ![logo-v3](https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/f99f5300-ece6-4572-8c4b-56b90e6e4d74) ![lite-ai-toolkit](https://github.com/user-attachments/assets/dc567d38-3fc4-4c9c-84de-3bfdf524aeab) -->

lite-ai-toolkit

<div align='center'> <img src=https://img.shields.io/badge/Linux-pass-brightgreen.svg > <img src=https://img.shields.io/badge/Device-GPU/CPU-yellow.svg > <img src=https://img.shields.io/badge/ONNXRuntime-1.17.1-turquoise.svg > <img src=https://img.shields.io/badge/MNN-2.8.2-hotpink.svg > <img src=https://img.shields.io/badge/TensorRT-10-turquoise.svg > <img src=https://img.shields.io/github/stars/xlite-dev/lite.ai.toolkit.svg?style=social > </div>

πŸ› Lite.Ai.ToolKit: A lite C++ toolkit of 100+ Awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub. Welcome to πŸŒŸπŸ‘†πŸ»star this repo to support me, many thanks ~ πŸŽ‰πŸŽ‰

<div align='center'> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/5b28aed1-e207-4256-b3ea-3b52f9e68aed' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/28274741-8745-4665-abff-3a384b75f7fa' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/c802858c-6899-4246-8839-5721c43faffe' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/20a18d56-297c-4c72-8153-76d4380fc9ec' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/f4dd5263-8514-4bb0-a0dd-dbe532481aff' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/b6a431d2-225b-416b-8a1e-cf9617d79a63' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/84d3ed6a-b711-4c0a-8e92-a2da05a0d04e' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/157b9e11-fc92-445b-ae0d-0d859c8663ee' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/ef0eeabe-6dbe-4837-9aad-b806a8398697' style="height:80px;width:80px;object-fit:cover;"> </div>

πŸ“– News πŸ”₯πŸ”₯

<div id="news"></div>
  • [2026/03] Cache-DiT πŸŽ‰v1.3.0 is released; major updates include: Ring Attention with batched P2P, USP (hybrid Ring and Ulysses), hybrid 2D and 3D parallelism (πŸ’₯USP + TP), and reduced VAE-P communication overhead.

arch

Citations πŸŽ‰πŸŽ‰

@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of 100+ Awesome AI models.},
  url={https://github.com/xlite-dev/lite.ai.toolkit},
  note={Open-source software available at https://github.com/xlite-dev/lite.ai.toolkit},
  author={xlite-dev and wangzijian1010 and others},
  year={2021}
}

Features πŸ‘πŸ‘‹

  • Simple and user-friendly. Simple and consistent syntax, e.g. lite::cv::Type::Class, see examples.
  • Minimum Dependencies. Only OpenCV and ONNXRuntime are required by default, see build.
  • Many Models Supported. 300+ C++ implementations and 500+ weights πŸ‘‰ Supported-Matrix.

Build πŸ‘‡πŸ‘‡

Download prebuilt lite.ai.toolkit library from tag/v0.2.0, or just build it from source:

git clone --depth=1 https://github.com/xlite-dev/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh # >= 0.2.0, support Linux only, tested on Ubuntu 20.04.6 LTS

Quick Start 🌟🌟

<div id="lite.ai.toolkit-Quick-Start"></div>

Example 0: Object Detection using YOLOv5. Download the model from Model-Zoo<sup>2</sup>.

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  // 1. create the detector from an ONNX model.
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  // 2. run detection.
  yolov5->detect(img_bgr, detected_boxes);
  // 3. draw the boxes and save the result.
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
  return 0;
}

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg

πŸŽ‰πŸŽ‰TensorRT: Boost inference performance with NVIDIA GPU via TensorRT.

Run bash ./build.sh tensorrt to build lite.ai.toolkit with TensorRT support, and then test YOLOv5 with the code below. NOTE: lite.ai.toolkit needs TensorRT 10.x (or later) and CUDA 12.x (or later). Please check build.sh, tensorrt-linux-x86_64-install.zh.md, test_lite_yolov5.cpp and NVIDIA/TensorRT for more details.

// trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
auto *yolov5 = new lite::trt::cv::detection::YOLOV5(engine_path);
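The two lines above can be expanded into a full program following the same pattern as the ONNX example. This is a minimal sketch, assuming the lite::trt detector exposes the same detect() interface as the lite::cv version; see test_lite_yolov5.cpp for the actual API:

```cpp
#include "lite/lite.h"

int main() {
  // NOTE: sketch only -- assumes lite::trt::cv::detection::YOLOV5
  // mirrors the lite::cv detect() interface shown earlier.
  std::string engine_path = "yolov5s.engine";  // built via trtexec above
  std::string test_img_path = "test_yolov5.jpg";

  auto *yolov5 = new lite::trt::cv::detection::YOLOV5(engine_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite("test_results_trt.jpg", img_bgr);
  delete yolov5;
  return 0;
}
```

Note that the engine file is hardware-specific: rebuild it with trtexec on each target GPU rather than shipping a prebuilt .engine.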

Quick Setup πŸ‘€

To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt shown below. πŸ‘‡πŸ‘€

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})

Mixed with MNN or ONNXRuntime πŸ‘‡πŸ‘‡

The goal of lite.ai.toolkit is not to abstract on top of MNN and ONNXRuntime. So, you can use lite.ai.toolkit mixed with MNN (-DENABLE_MNN=ON, default OFF) or ONNXRuntime (-DENABLE_ONNXRUNTIME=ON, default ON). The lite.ai.toolkit installation package contains complete MNN and ONNXRuntime builds. The workflow may look like:

#include "lite/lite.h"
// 0. use YOLOv5 from lite.ai.toolkit to detect objects.
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
// 1. use ONNXRuntime or MNN to implement your own classifier.
interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));
// or: session = new Ort::Session(ort_env, onnx_path, session_options);
classifier = interpreter->createSession(schedule_config);
// 2. then, classify the detected objects using your own classifier ...

The included headers of MNN and ONNXRuntime can be found at mnn_config.h and ort_config.h.

<details> <summary> πŸ”‘οΈ Check the detailed Quick Start! Click here! </summary>

Download resources

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz

Write test code

Write the YOLOv5 example code and name it test_lite_yolov5.cpp:

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
  return 0;
}

</details>
