# Bolt
Bolt is a deep learning library with high performance and heterogeneous flexibility.
## Introduction

Bolt is a light-weight library for deep learning. As a universal deployment tool for all kinds of neural networks, Bolt aims to automate the deployment pipeline and achieve extreme acceleration. Bolt has been widely deployed and used in many departments of Huawei, such as the 2012 Laboratory, CBG and the Huawei Product Lines. If you have questions or suggestions, you can submit an issue. QQ group: 833345709
## Why Bolt is what you need?
- High Performance: 15%+ faster than existing open-source acceleration libraries.
- Rich Model Conversion: supports Caffe, ONNX, TFLite and Tensorflow.
- Various Inference Precisions: supports FP32, FP16, INT8 and 1-BIT.
- Multiple Platforms: ARM CPU (v7, v8, v8.2+, v9), x86 CPU (AVX2, AVX512), GPU (Mali, Qualcomm, Intel, AMD).
- Bolt is the first framework to support NLP, and it also supports common CV applications.
- Minimal ROM/RAM footprint.
- Rich graph optimizations.
- Efficient thread-affinity settings.
- Automatic algorithm tuning.
- Time-series data acceleration.
See more features and details here.
## Building Status
Some commonly used inference platforms are listed below; more targets can be found in scripts/target.sh. Please choose a suitable target for your environment. If you want to build the on-device training module, add the --train option. If you want to use multi-threaded parallelism, add the --openmp option. If you want to build for Cortex-M or Cortex-A7 with restricted ROM/RAM (sensors, MCUs), see docs/LITE.md.

Bolt links the static library by default, which may cause problems on some platforms. You can use the --shared option to link the shared library instead.
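For example, the options above can be combined in a single invocation. The sketch below is a hypothetical command line assembled from the flags described in this section (--target, --openmp, --shared); it assumes you run it from the root of the Bolt source tree with the required toolchain already configured.

```shell
# Sketch: build Bolt for Android armv8.2+ (fp32/fp16/int8/bnn),
# enabling multi-threaded parallelism (--openmp) and
# shared-library linking (--shared).
# Assumes the Android NDK toolchain is set up as required by install.sh.
./install.sh --target=android-aarch64 --openmp --shared
```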
(✓ = build supported on this host OS, / = not supported)

| target platform      | precision               | build command                                        | Linux | Windows | MacOS |
| -------------------- | ----------------------- | ---------------------------------------------------- | ----- | ------- | ----- |
| Android(armv7)       | fp32,int8               | ./install.sh --target=android-armv7                  | ✓     | ✓       | ✓     |
| Android(armv8)       | fp32,int8               | ./install.sh --target=android-aarch64 --fp16=off     | ✓     | ✓       | ✓     |
| Android(armv8.2+)    | fp32,fp16,int8,bnn      | ./install.sh --target=android-aarch64                | ✓     | ✓       | ✓     |
| Android(armv9)       | fp32,fp16,bf16,int8,bnn | ./install.sh --target=android-aarch64_v9             | ✓     | ✓       | ✓     |
| Android(gpu)         | fp16                    | ./install.sh --target=android-aarch64 --gpu          | ✓     | ✓       | ✓     |
| Android(x86_64)      | fp32,int8               | ./install.sh --target=android-x86_64                 | ✓     | ✓       | ✓     |
| iOS(armv7)           | fp32,int8               | ./install.sh --target=ios-armv7                      | /     | /       | ✓     |
| iOS(armv8)           | fp32,int8               | ./install.sh --target=ios-aarch64 --fp16=off         | /     | /       | ✓     |
| iOS(armv8.2+)        | fp32,fp16,int8,bnn      | ./install.sh --target=ios-aarch64                    | /     | /       | ✓     |
| Linux(armv7)         | fp32,int8               | ./install.sh --target=linux-armv7_blank              | ✓     | /       | /     |
| Linux(armv8)         | fp32,int8               | ./install.sh --target=linux-aarch64_blank --fp16=off | ✓     | /       | /     |
| Linux(armv8.2+)      | fp32,fp16,int8,bnn      | ./install.sh --target=linux-aarch64_blank            | ✓     | /       | /     |
| Linux(x86_64)        | fp32,int8               | ./install.sh --target=linux-x86_64                   | ✓     | /       | /     |
| Linux(x86_64_avx2)   | fp32                    | ./install.sh --target=linux-x86_64_avx2              | ✓     | /       | /     |
| Linux(x86_64_avx512) | fp32,int8               | ./install.sh --target=linux-x86_64_avx512            | ✓     | /       | /     |
| Windows(x86_64)      | fp32,int8               | ./install.sh --target=windows-x86_64                 | /     | ✓       | /     |
| Windows(x86_64_avx2) | fp32                    | ./install.sh --target=windows-x86_64_avx2            | /     | ✓       | /     |

