InspireFace
InspireFace is a cross-platform face recognition SDK developed in C/C++, supporting multiple operating systems and various backend types for inference, such as CPU, GPU, and NPU.
If you require further information on tracking development branches, CI/CD processes, or downloading pre-compiled libraries, please visit our development repository.
Please contact contact@insightface.ai using your company e-mail for commercial support, including obtaining and integrating higher accuracy models, as well as custom development.
<img src="images/banner.jpg" alt="banner" style="zoom:80%;" />

📘 Documentation is a work in progress. We welcome your questions💬; they help guide and accelerate its development.
Change Logs
2025-08-03 Added a multi-link model download channel for the Python SDK.
2025-06-15 Reorganized and streamlined the error code table.
2025-06-08 Added facial expression recognition.
2025-04-27 Fixed several issues and released a stable version.
2025-03-16 Added acceleration on NVIDIA GPU (CUDA) devices.
2025-03-09 Released the Android SDK on JitPack.
2025-02-20 Upgraded the face landmark model.
2025-01-21 Updated all models to t3 and added a tool to convert cosine similarity to a percentage.
2025-01-08 Added support for inference on the NPU of Rockchip RK3566/RK3568 devices.
2024-12-25 Added support for optional RKRGA image acceleration on Rockchip devices.
2024-12-22 Started adapting multiple Rockchip devices with NPU support, beginning with the RV1103/RV1106.
2024-12-10 Added support for quick installation via the Python package manager.
2024-10-09 Added system resource monitoring and session statistics.
2024-09-30 Fixed some bugs in the feature hub.
2024-08-18 Updated benchmark: using CoreML with Apple's Neural Engine (ANE) on the iPhone 13, the combined face detection + alignment + feature extraction pipeline takes less than 2 ms.
2024-07-17 Added global resource statistics monitoring to prevent memory leaks.
2024-07-07 Added face action detection to the face interaction module.
2024-07-05 Fixed some bugs in the Python ctypes interface.
2024-07-03 Added the blink detection algorithm to the face interaction module.
2024-07-02 Fixed several bugs in the face detector with multi-level input.
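The similarity-to-percentage conversion mentioned in the 2025-01-21 entry can be sketched as a simple remapping of the cosine score onto a 0–100 scale. The sketch below only illustrates the idea: the `low` and `high` calibration points are made-up assumptions, not the SDK's actual calibration.

```python
def cosine_to_percentage(similarity: float,
                         low: float = 0.02,
                         high: float = 0.7) -> float:
    """Map a cosine similarity score to a 0-100 percentage.

    `low` and `high` are hypothetical calibration points: scores at or
    below `low` map to 0%, scores at or above `high` map to 100%, and
    values in between are interpolated linearly.
    """
    clamped = max(low, min(high, similarity))
    return (clamped - low) / (high - low) * 100.0

print(cosine_to_percentage(0.7))   # → 100.0
print(cosine_to_percentage(0.02))  # → 0.0
```

In practice such calibration points are chosen from the score distributions of genuine and impostor pairs for a given model, which is why they differ between model packs.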
License
The licensing of the open-source models employed by InspireFace adheres to the same requirements as InsightFace, specifying their use solely for academic purposes and explicitly prohibiting commercial applications.
Quick Start
For Python users on Linux and macOS, InspireFace can be installed quickly via pip:
pip install -U inspireface
After installation, you can use inspireface like this:
import cv2
import inspireface as isf

# Create a session with optional features
opt = isf.HF_ENABLE_NONE
session = isf.InspireFaceSession(opt, isf.HF_DETECT_MODE_ALWAYS_DETECT)

# Load the image using OpenCV
image = cv2.imread(image_path)

# Perform face detection on the image
faces = session.face_detection(image)
for face in faces:
    x1, y1, x2, y2 = face.location
    # Build a rotated rectangle from the box center, size, and roll angle
    center = ((x1 + x2) / 2, (y1 + y2) / 2)
    size = (x2 - x1, y2 - y1)
    angle = face.roll
    rect = (center, size, angle)
    # Compute and draw the rotated bounding box
    box = cv2.boxPoints(rect)
    box = box.astype(int)
    cv2.drawContours(image, [box], 0, (100, 180, 29), 2)

cv2.imshow("face detection", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
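The `cv2.boxPoints` call above rotates the four axis-aligned corners of the box around its center by the roll angle. The same corners can be computed by hand with a 2-D rotation matrix; a minimal NumPy sketch (center, size, and angle values below are arbitrary examples):

```python
import numpy as np

def rotated_box_corners(center, size, angle_deg):
    """Return the four corners of a box of `size` centered at `center`,
    rotated by `angle_deg` degrees around the center."""
    cx, cy = center
    w, h = size
    theta = np.radians(angle_deg)
    # Standard 2-D rotation matrix
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Axis-aligned corners expressed relative to the center
    half = np.array([[-w / 2, -h / 2],
                     [ w / 2, -h / 2],
                     [ w / 2,  h / 2],
                     [-w / 2,  h / 2]])
    # Rotate each corner, then translate back to the center
    return half @ rot.T + np.array([cx, cy])

# With a 0-degree roll this is just the plain axis-aligned box
corners = rotated_box_corners((100, 100), (40, 20), 0)
print(corners)
```

The corner ordering may differ from what `cv2.boxPoints` returns, but the resulting quadrilateral is the same.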
⚠️ The project is currently in a rapid iteration phase; please pull the latest models from the remote before each update!
import inspireface

for model in ["Pikachu", "Megatron"]:
    inspireface.pull_latest_model(model)
More examples can be found in the python directory.
Preparation
Clone 3rdparty
Clone the 3rdparty repository into the root directory of the project. Note that it contains submodules, so clone with the --recurse-submodules flag, or, after entering the directory, run git submodule update --init --recursive to fetch and synchronize them:
# Must enter this directory
cd InspireFace
# Clone the repository and pull submodules
git clone --recurse-submodules https://github.com/tunmx/inspireface-3rdparty.git 3rdparty
If you need to update the 3rdparty repository, or if you didn't use the --recurse-submodules flag during the initial clone, you can run git submodule update --init --recursive:
# Must enter this directory
cd InspireFace
# If you're not using recursive pull
git clone https://github.com/tunmx/inspireface-3rdparty.git 3rdparty
cd 3rdparty
git pull
# Update submodules
git submodule update --init --recursive
Downloading Model Package Files
You can download the model package files, which contain the models and configurations needed for compilation, from the Release Page and extract them to any location.
Alternatively, use the command/download_models_general.sh script to download the resource files; they are placed in the test_res/pack directory, which is where the test programs read them from by default.
⚠️ The project is currently in a rapid iteration phase; please pull the latest models from the remote before each update!
# Download lightweight resource files for mobile devices
bash command/download_models_general.sh Pikachu
# Download resource files for mobile devices or PC/server
bash command/download_models_general.sh Megatron
# Download resource files for RV1109
bash command/download_models_general.sh Gundam_RV1109
# Download resource files for RV1106
bash command/download_models_general.sh Gundam_RV1106
# Download resource files for RK356X
bash command/download_models_general.sh Gundam_RK356X
# Download resource files for RK3588
bash command/download_models_general.sh Gundam_RK3588
# Download resource files for NVIDIA GPU devices (TensorRT)
bash command/download_models_general.sh Megatron_TRT
# Download all model files
bash command/download_models_general.sh
Installing MNN
The 3rdparty directory already includes the MNN library, pinned to a particular version as the stable one. If you need to enable or disable additional configuration options during compilation, refer to the CMake options provided by MNN. If you prefer your own precompiled version, feel free to replace it.
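For illustration, an out-of-tree MNN configure step might look like the sketch below. It assumes the bundled copy lives at 3rdparty/MNN, and the MNN_* option names are taken from MNN's published CMake options; verify both against the version actually shipped in 3rdparty:

```shell
cd 3rdparty/MNN
mkdir -p build && cd build
# Build MNN as a static release library; MNN_BUILD_SHARED_LIBS is one of
# MNN's documented CMake options (OFF produces a static library)
cmake -DCMAKE_BUILD_TYPE=Release \
      -DMNN_BUILD_SHARED_LIBS=OFF ..
make -j4
```

This is only needed if you want to rebuild MNN with different options; the prebuilt library in 3rdparty works as-is.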
Requirements
- CMake (version 3.20 or higher)
- NDK (version 16 or higher, only required for Android) [Optional]
- MNN (version 3.x or higher)
- C++ Compiler
    - Either GCC or Clang can be used (macOS does not require additional installation, as Xcode is included)
        - Recommended GCC version is 4.9 or higher
            - Note that in some distributions, GCC (GNU C Compiler) and G++ (GNU C++ Compiler) are installed separately; for instance, on Ubuntu you need to install both gcc and g++
        - Recommended Clang version is 3.9 or higher
    - arm-linux-gnueabihf (for RV1109/RV1126) [Optional]
        - Prepare the cross-compilation toolchain in advance, such as gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf
- CUDA (version 11.x or higher) [Optional]
    - GPU-based inference requires installing NVIDIA's CUDA dependencies on the device.
- TensorRT (version 10 or higher) [Optional]
- Eigen3
- RKNN [Optional]
    - Adjust and select the currently supported versions for your specific requirements.
Compilation
CMake options are used to control the various details of the compilation phase. Please select them according to your actual requirements; see CMake Option.
Local Compilation
If you are using macOS or Linux, you can quickly compile using the shell scripts provided in the command folder at the project root:
cd InspireFace/
# Execute the local compilation script
bash command/build.sh
After compilation, you can find the
