# Lite.AI.toolkit

A lite C++ AI toolkit: 100+ models with MNN, ONNXRuntime and TensorRT, including detection, segmentation, Stable-Diffusion, face fusion, etc.
Lite.Ai.ToolKit: a lite C++ toolkit of 100+ awesome AI models, such as object detection, face detection, face recognition, segmentation, matting, etc. See the Model Zoo and the ONNX Hub, MNN Hub, TNN Hub and NCNN Hub. Welcome to star this repo to support me, many thanks!
<div align='center'> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/5b28aed1-e207-4256-b3ea-3b52f9e68aed' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/28274741-8745-4665-abff-3a384b75f7fa' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/c802858c-6899-4246-8839-5721c43faffe' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/20a18d56-297c-4c72-8153-76d4380fc9ec' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/f4dd5263-8514-4bb0-a0dd-dbe532481aff' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/b6a431d2-225b-416b-8a1e-cf9617d79a63' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/84d3ed6a-b711-4c0a-8e92-a2da05a0d04e' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/157b9e11-fc92-445b-ae0d-0d859c8663ee' style="height:80px;width:80px;object-fit:cover;"> <img src='https://github.com/xlite-dev/lite.ai.toolkit/assets/31974251/ef0eeabe-6dbe-4837-9aad-b806a8398697' style="height:80px;width:80px;object-fit:cover;"> </div>

## News
<div id="news"></div>

- [2026/03] Cache-DiT v1.3.0 is released. Major updates include: Ring Attention with batched P2P, USP (hybrid Ring and Ulysses), hybrid 2D and 3D parallelism (USP + TP), and reduced VAE-P communication overhead.

- Most of my time is now focused on LLM/VLM inference. Please check Awesome-LLM-Inference and LeetCUDA for more details. lite.ai.toolkit is now mainly maintained by @wangzijian1010.
## Citations
```bibtex
@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of 100+ Awesome AI models.},
  url={https://github.com/xlite-dev/lite.ai.toolkit},
  note={Open-source software available at https://github.com/xlite-dev/lite.ai.toolkit},
  author={xlite-dev and wangzijian1010 and others},
  year={2021}
}
```
## Features
- Simple and user-friendly. A simple and consistent syntax, `lite::cv::Type::Class`; see examples.
- Minimal dependencies. Only OpenCV and ONNXRuntime are required by default; see build.
- Many models supported. 300+ C++ implementations and 500+ weights; see the Supported-Matrix.
## Build
Download the prebuilt lite.ai.toolkit library from tag/v0.2.0, or build it from source:
```shell
git clone --depth=1 https://github.com/xlite-dev/lite.ai.toolkit.git # latest
cd lite.ai.toolkit && sh ./build.sh # >= 0.2.0; supports Linux only, tested on Ubuntu 20.04.6 LTS
```
## Quick Start
<div id="lite.ai.toolkit-Quick-Start"></div>

Example 0: object detection using YOLOv5. Download the model from the Model-Zoo<sup>2</sup>.
```c++
#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  // create the YOLOv5 detector from the ONNX model.
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);

  // run detection, draw the boxes in place, and save the result.
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
  return 0;
}
```
You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.
```shell
export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
```
**TensorRT**: boost inference performance on NVIDIA GPUs via TensorRT.
Run `bash ./build.sh tensorrt` to build lite.ai.toolkit with TensorRT support, then test YOLOv5 with the code below. NOTE: lite.ai.toolkit needs TensorRT 10.x (or later) and CUDA 12.x (or later). Please check build.sh, tensorrt-linux-x86_64-install.zh.md, test_lite_yolov5.cpp and NVIDIA/TensorRT for more details.
```c++
// convert the ONNX model to a TensorRT engine first:
// trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine
auto *yolov5 = new lite::trt::cv::detection::YOLOV5(engine_path);
```
## Quick Setup
To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt fragment below.
```cmake
set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
```
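For reference, a complete minimal `CMakeLists.txt` built around those four lines might look as follows; the project name and the C++ standard are assumptions for illustration, not toolkit requirements:

```cmake
cmake_minimum_required(VERSION 3.10)
project(lite_yolov5_demo CXX)
set(CMAKE_CXX_STANDARD 17)

# Point this at the directory where lite.ai.toolkit was installed/extracted.
set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})

add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
```

With this file in place, a standard out-of-source build is `cmake -B build && cmake --build build`.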
## Mixed with MNN or ONNXRuntime
The goal of lite.ai.toolkit is not to abstract on top of MNN and ONNXRuntime, so you can mix lite.ai.toolkit with MNN (`-DENABLE_MNN=ON`, default OFF) or ONNXRuntime (`-DENABLE_ONNXRUNTIME=ON`, default ON). The lite.ai.toolkit installation package contains complete MNN and ONNXRuntime distributions. The workflow may look like:
```c++
#include "lite/lite.h"

// 0. use YOLOv5 from lite.ai.toolkit to detect objects.
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);

// 1. use ONNXRuntime or MNN to implement your own classifier.
interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));
// or: session = new Ort::Session(ort_env, onnx_path, session_options);
classifier = interpreter->createSession(schedule_config);

// 2. then, classify the detected objects using your own classifier ...
```
The bundled headers for MNN and ONNXRuntime can be found in mnn_config.h and ort_config.h.
<details>
<summary>Check the detailed Quick Start (click here!)</summary>

### Download resources
You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.
```shell
export LITE_AI_TAG_URL=https://github.com/xlite-dev/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
```
### Write test code

Write the YOLOv5 example code and save it as `test_lite_yolov5.cpp`:
```c++
#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
  return 0;
}
```

</details>