LightNet
A lightweight neural network optimizer framework for different software/hardware backends.
Overview
LightNet is a lightweight neural network optimizer framework for different software/hardware backends.
See the Documentation for full usage and technical details.
Installation
Requirements
The following steps have been tested on Ubuntu 18.04 but should work with other distros as well.
Most required packages can be installed using the following commands
(sudo permission may be required):
$ sudo apt-get install build-essential perl git pkg-config check libjson-perl
This project also depends on TensorLight, a lightweight tensor operation library. Install it according to its repository before continuing to build LightNet.
(Optional) Packages for building documents
Use the following commands to install the packages for building documents:
$ sudo apt-get install python3-pip
$ pip3 install --user mkdocs "markdown>=3.1.1" pygments
(Optional) Packages for building python packages
Use the following commands to install the packages for building python packages (python3 for example):
$ sudo apt-get install python3-setuptools
(Optional) CUDA dependency
You also have to install CUDA 8.0 (or later) according to its website CUDA Toolkit if you want to build with CUDA support. Remember to put nvcc (usually in /usr/local/cuda/bin) in the environment variable PATH.
(Optional) cuDNN dependency
You also have to install CUDA 8.0 (or later) and cuDNN 7.0 (or later) libraries, according to their websites CUDA Toolkit and cuDNN if you want to build with cuDNN support.
(Optional) TensorRT dependency
You also have to install CUDA 8.0 (or later) and TensorRT 3.0 (or later) libraries, according to their websites CUDA Toolkit and TensorRT if you want to build with TensorRT support.
(Optional) Other packages for development
NOTE The perl packages listed here are used under v5.26.1. If you are using a more recent perl version, there is a chance they cannot be installed. I recommend using perlbrew to install and switch to perl-5.26.1 (there is another benefit: the cpan commands can be used without sudo).
$ sudo cpan install Clone Devel::Confess Parse::RecDescent
Building and Installation
Clone this repository to your local directory.
$ cd <my_working_directory>
$ git clone https://github.com/zhaozhixu/LightNet.git
$ cd LightNet
Configure and build
First, configure your installation using:
$ chmod +x configure
$ ./configure
There are options to customize your building and installation process; you can append them after ./configure. For example, use $ ./configure --install-dir=DIR to set the installation directory (default is /usr/local), and use $ ./configure --with-cuda=yes if you want to build with CUDA support. Detailed ./configure options can be displayed using ./configure -h.
After that, use make to compile the binaries and run the tests. Finally, use make install to install the build directory into the installation directory.
Other make options
Use make info to see other make options. In particular, you can use make clean to clean up the build directory and all object files, and make uninstall to remove installed files from the installation directory.
After compilation and installation, the following components will be installed:
- lightnet: the LightNet command line tool
- liblightnet.so: the LightNet runtime library
- il2json: the LightNet intermediate language (IL) interpreter
- pylightnet (optional): the LightNet python wrapper package
Usage
LightNet provides three interfaces, which can be used by developers and users:
Command Line Interface
From the command line, type lightnet -h to display the program's usage.
The command line interface is designed for two purposes:
- Compile a neural network model into an optimized model for the specified target platform, to be run later via the CLI or API.
- Run the optimized model with fixed data for debugging/testing purposes.
By default, lightnet both compiles and runs the model with zeroed data.
It accepts models in the form of Intermediate Representation (IR) written in JSON. For a simple example, suppose we have a tiny network composed of three operators: create, slice, and print, which create a tensor, slice it along some dimensions, and print the result to stdout, as described here.
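Purely as an illustration of what such a model file might contain (the field and parameter names below are hypothetical, not the real schema; see the linked example and the Documentation for the actual IR format), an IR model is a JSON document listing the network's operators and their parameters:

```json
{
    "ops": [
        {"name": "create1", "optype": "create",
         "params": {"dims": [2, 4], "data": [1, 2, 3, 4, 5, 6, 7, 8]}},
        {"name": "slice1", "optype": "slice",
         "params": {"axis": 1, "start": 1, "len": 3}},
        {"name": "print1", "optype": "print", "params": {}}
    ]
}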
If we save the above tiny model in example.json and execute the following command, we will get print's message:
$ lightnet example.json
tensor2:
[[2.000 3.000 4.000]
[6.000 7.000 8.000]]
info: run time: 0.000019s
Here is a more complicated example. To play with it, LightNet should be configured with --with-tensorrt=yes. Suppose there is a YOLOv3 model in yolov3.json, generated by il2json (described in the Model Format section later):
$ il2json protos/net/yolov3.net -o yolov3.json
Then the following command will compile the yolov3.json model
for the TensorRT
platform, save the compiled model in out.json,
and run the compiled model with zeroed data.
$ lightnet -t tensorrt yolov3.json
info: run time: 0.015035s
As we can see, combined with the il2json utility, more human-readable IL (Intermediate Language) models can also be accepted. In the following command, il2json first reads the IL model in protos/net/yolov3.net and prints the converted IR model to the pipe; then lightnet reads the IR model from the pipe (the trailing - tells lightnet to read from stdin), and compiles and runs it.
$ il2json protos/net/yolov3.net | lightnet -t tensorrt -
info: run time: 0.014997s
To just compile a model without running it, use the -c option. The default output file name is out.json; the -o option changes the output file name.
$ lightnet -t tensorrt -c -o yolov3-compiled.json yolov3.json
Then we can run the compiled model efficiently, without recompiling it, using the -r option:
$ lightnet -r yolov3-compiled.json
info: run time: 0.014979s
If you have prepared a specific data file for the tensors in the model (such as the weights), you can specify it with the -f option.
$ lightnet -r yolov3-compiled.json -f your-datafile.wts
The -f option in lightnet may specify the data file for both compilation and runtime. Some target platforms accept a data file in the compilation phase; for example, the tensorrt platform uses it to serialize the TensorRT model, which makes the start-up of the computation faster (see the examples below).
C API
An example using the C API is in example/object-detect.c.
This example performs a single-object detection algorithm using TensorRT, with ShuffleNetV2 as the backbone conv-net and SqueezeDet's detection part as the feature expression algorithm.
To play with this demo, LightNet should be configured with
--with-tensorrt=yes
To minimize dependencies, this demo uses libjpeg to read JPEG files, which requires the libjpeg-dev package; on Ubuntu it can be installed via
$ sudo apt-get install libjpeg-dev
After the installation of LightNet, you need to enter the
example directory and compile the demo:
$ cd example
$ make
Then, in the example directory, enter the following commands to run the model:
$ il2json data/shuffledet_dac.net -o out.json
$ ./object-detect out.json data/shuffledet_dac.wts data/images
Or, compile the model first, run the model later:
$ il2json data/shuffledet_dac.net | lightnet -c -t tensorrt -f data/shuffledet_dac.wts -
$ ./object-detect -r out.json data/shuffledet_dac.wts data/images
The -f option in lightnet specifies a data file for compilation. Some target platforms accept a data file in the compilation phase; for example, the tensorrt platform uses it to serialize the TensorRT model, which makes the start-up of the computation faster.
You should then get a series of bounding-box coordinates (xmin, ymin, xmax, ymax), one per input image, printed in the terminal like this:
... some TensorRT log ...
[197.965088, 163.387146, 275.853577, 323.011322]
[197.478195, 161.533936, 276.260590, 322.812714]
[197.014648, 158.862747, 276.261810,
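For downstream use, the printed boxes are easy to parse. The following is a small post-processing sketch (not part of the demo) that converts output lines of the form shown above into float tuples, skipping the TensorRT log lines:

```python
# Hypothetical post-processing sketch (not part of the demo):
# parse bounding-box lines such as
#   [197.965088, 163.387146, 275.853577, 323.011322]
# into (xmin, ymin, xmax, ymax) float tuples.
import re

BOX_RE = re.compile(r"^\[([^\]]+)\]$")

def parse_boxes(lines):
    boxes = []
    for line in lines:
        m = BOX_RE.match(line.strip())
        if not m:
            continue  # skip TensorRT log lines and other output
        boxes.append(tuple(float(x) for x in m.group(1).split(",")))
    return boxes
```

Feeding it the demo's stdout, line by line, yields one tuple per input image.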