# Wonnx
Wonnx is a GPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web.
## Supported Platforms (enabled by wgpu)
| API    | Windows       | Linux & Android | macOS & iOS |
| ------ | ------------- | --------------- | ----------- |
| Vulkan | ✅            | ✅              |             |
| Metal  |               |                 | ✅          |
| DX12   | ✅ (W10 only) |                 |             |
| DX11   | 🚧            |                 |             |
| GLES3  |               | 🆗              |             |
✅ = First Class Support — 🆗 = Best Effort Support — 🚧 = Unsupported, but support in progress
## Getting started
### From the command line
Ensure your system supports either Vulkan, Metal or DX12 for access to the GPU. Then either download a binary release, or install Rust and run the following to install the CLI:

```bash
cargo install --git https://github.com/webonnx/wonnx.git wonnx-cli
```
The CLI tool (`nnx`) provides a convenient interface for tinkering with models (see the README for more information):

```bash
nnx info ./data/models/opt-squeeze.onnx
nnx infer ./data/models/opt-squeeze.onnx -i data=./data/images/pelican.jpeg --labels ./data/models/squeeze-labels.txt --top 3
```
### From Rust
Add the `wonnx` crate as a dependency (`cargo add wonnx`). Then see the examples for usage, or browse the API docs.
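For a quick orientation, here is a minimal sketch of loading and running a model, modeled on the test shown near the end of this README; the exact input and output types, and whether `run` borrows its arguments, may differ between wonnx versions:

```rust
use std::collections::HashMap;

fn main() {
    // Input name -> tensor data; "x" matches the single_relu example model
    // used elsewhere in this README.
    let x: [f32; 2] = [-1.0, 2.0];
    let mut input_data: HashMap<String, &[f32]> = HashMap::new();
    input_data.insert("x".to_string(), x.as_slice());

    // `Session::from_path` loads and compiles the model for the GPU.
    let session = pollster::block_on(wonnx::Session::from_path(
        "./data/models/single_relu.onnx",
    ))
    .expect("session did not create");

    // Run inference; the result maps output names to arrays of values.
    let result = pollster::block_on(session.run(input_data)).unwrap();
    println!("{:?}", result["y"]); // expected: [0.0, 2.0]
}
```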
### From Python
```bash
pip install wonnx
```
And then, to use:
```python
from wonnx import Session

session = Session.from_path("../data/models/single_relu.onnx")
inputs = {"x": [-1.0, 2.0]}
assert session.run(inputs) == {"y": [0.0, 2.0]}
```
Then run the above code with `python3`!
For more details on the Python package including build instructions, see wonnx-py.
### In the browser, using WebGPU + WebAssembly
```bash
npm install @webonnx/wonnx-wasm
```
And then, on the client side:
```js
import init, { Session, Input } from "@webonnx/wonnx-wasm";

// Check for WebGPU availability first: if (navigator.gpu) { ... }
await init();
const session = await Session.fromBytes(modelBytes /* Uint8Array containing the ONNX file */);
const input = new Input();
input.insert("x", [13.0, -37.0]);

// The result is an object whose keys are the names of the model outputs
// and whose values are arrays of numbers.
const result = await session.run(input);
session.free();
input.free();
```
The package @webonnx/wonnx-wasm provides an interface to WONNX, which is included as a WebAssembly module and will use the browser's WebGPU implementation. See wonnx-wasm-example for a more complete usage example involving a bundler.
For more details on the JS/WASM package including build instructions, see wonnx-wasm.
## For development
To work on wonnx itself, follow these steps:

- Install Rust
- Install Vulkan, Metal, or DX12 for the GPU API
- Clone this repo:

```bash
git clone https://github.com/webonnx/wonnx.git
```
Then you're all set! You can run one of the included examples through cargo:

```bash
cargo run --example squeeze --release
```
### Running other models
- To run an ONNX model, first simplify it with `nnx prepare` (substitute `cargo run -- prepare` when inside this repo):

  ```bash
  nnx prepare -i ./some-model.onnx ./some-model-prepared.onnx
  ```

  To specify dynamic dimension parameters, add e.g. `--set batch_size=1`.
  You can also use an external tool, such as onnx-simplifier, with the command:

  ```bash
  # pip install -U pip && pip install onnx-simplifier
  python -m onnxsim mnist-8.onnx opt-mnist.onnx
  ```
- Then you can run it using the CLI (see the README) or programmatically, following the examples in the examples folder. To run an example:

  ```bash
  cargo run --example mnist --release
  ```
## Tested models
- Squeezenet
- MNIST
- BERT
## GPU selection
Except when running in WebAssembly, you may set the following environment variables to influence GPU selection by WGPU:
- `WGPU_ADAPTER_NAME` with a substring of the name of the adapter you want to use (e.g. `1080` will match `NVIDIA GeForce 1080ti`).
- `WGPU_BACKEND` with a comma-separated list of the backends you want to use (`vulkan`, `metal`, `dx12`, `dx11`, or `gl`).
- `WGPU_POWER_PREFERENCE` with the power preference to choose when a specific adapter name isn't specified (`high` or `low`).
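For example, to force the Vulkan backend and prefer the more powerful GPU when running the CLI:

```bash
WGPU_BACKEND=vulkan WGPU_POWER_PREFERENCE=high nnx infer ./data/models/opt-squeeze.onnx -i data=./data/images/pelican.jpeg
```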
## Contribution: On implementing a new Operator
Contributions are very much welcomed, even without extensive experience in DL, WGSL, or Rust. I hope that this project can be a sandbox for all of us to learn more about those technologies beyond this project's initial scope.
To implement an operator, all you have to do is:
- Add a new matching pattern in `compiler.rs`.
- Retrieve its attribute values using the `get_attribute` function:

  ```rust
  let alpha = get_attribute("alpha", Some(1.0), node);
  // or without a default value
  let alpha = get_attribute::<f32>("alpha", None, node);
  ```
- Add any variable you want to use in the WGSL shader using `context`.
- Write a new WGSL template in the `templates` folder. Available types are in `structs.wgsl`, but you can also generate new ones within your templates.
- Respect the binding layout: each entry's binding index is incremented by 1, starting from 0, with inputs first and the output last. If the number of bindings goes above 4, increment the binding group. You can change the inputs within `sequencer.rs`.
- Write the logic.
There are default variables available in the context:

- `{{ i_lens[0] }}`: the length of input 0. This also works for outputs (`{{ o_lens[0] }}`) and other inputs (`{{ i_lens[1] }}`).
- `{{ i_shape[0] }}`: the array of dimensions of input 0. To get the first dimension of the array, use `{{ i_shape[0][0] }}`.
- `{{ i_chunks[0] }}`: the size of the chunks of each dimension of input 0. By default, each variable is represented as a long array of values, and to reach a specific value you have to move by chunks; those chunk sizes are stored in this variable. To get the chunk size of the first dimension, use `{{ i_chunks[0][0] }}`.
- `{{ op_type }}`: the op type, since some op types (e.g. activations) share the same template.
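For illustration, a minimal (hypothetical) template using these variables could look like the sketch below; the `Array` struct and the exact binding declarations are assumptions here, so refer to `structs.wgsl` and the existing templates for the real definitions:

```wgsl
{%- include "structs.wgsl" -%}

// Bindings increment by 1 starting from 0, inputs first, output last
// (names are illustrative).
@group(0) @binding(0)
var<storage, read> input_0: Array;
@group(0) @binding(1)
var<storage, read_write> output_0: Array;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let gidx = global_id.x;
    // {{ i_lens[0] }} is replaced by the length of input 0 when the template is rendered.
    if (gidx < {{ i_lens[0] }}u) {
        // A Relu-style elementwise op; a real template would branch on {{ op_type }}.
        output_0.data[gidx] = max(input_0.data[gidx], 0.0);
    }
}
```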
- Test it using the utils functions and place it in the tests folder. A test can look as follows:
```rust
use std::collections::HashMap;
// The test helpers (`model`, `graph`, `tensor`, `node`) come from wonnx's utils;
// check the existing tests for the exact import path.
use wonnx::utils::{graph, model, node, tensor};

#[test]
fn test_matmul_square_matrix() {
    // USER INPUT
    let n = 16;
    let mut input_data = HashMap::new();

    let data_a = ndarray::Array2::eye(n);
    let mut data_b = ndarray::Array2::<f32>::zeros((n, n));
    data_b[[0, 0]] = 0.2;
    data_b[[0, 1]] = 0.5;
    let sum = data_a.dot(&data_b);

    input_data.insert("A".to_string(), data_a.as_slice().unwrap());
    input_data.insert("B".to_string(), data_b.as_slice().unwrap());

    let n = n as i64;
    let model = model(graph(
        vec![tensor("A", &[n, n]), tensor("B", &[n, n])],
        vec![tensor("C", &[n, n])],
        vec![],
        vec![],
        vec![node(vec!["A", "B"], vec!["C"], "MatMul", "MatMul", vec![])],
    ));

    let session =
        pollster::block_on(wonnx::Session::from_model(model)).expect("Session did not create");
    let result = pollster::block_on(session.run(input_data)).unwrap();

    // Note: it is better to use a method that compares floats with a tolerance to account
    // for differences between implementations; see `wonnx/tests/common/mod.rs` for an example.
    assert_eq!((&result["C"]).try_into().unwrap(), sum.as_slice().unwrap());
}
```
Check out the Tera documentation for other templating operations: https://tera.netlify.app/docs/
- If at any point you want to optimise across several nodes, you can do so within `sequencer.rs`.
## Supported Operators (ref ONNX IR)
|Operator|Since version|Implemented|Shape inference supported|
|-|-|-|-|
|<a href="https://github.com/onnx/onnx/blob/main/docs/Operators.md#Abs">Abs</a>|<a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13">13</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-6">6</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-1">1</a>|✅|✅|
|<a href="https://github.com/onnx/onnx/blob/main/docs/Operators.md#Acos">Acos</a>|<a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7">7</a>|✅|✅|
|<a href="https://github.com/onnx/onnx/blob/main/docs/Operators.md#Acosh">Acosh</a>|<a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9">9</a>|✅|✅|
|<a href="https://github.com/onnx/onnx/blob/main/docs/Operators.md#Add">Add</a>|<a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-14">14</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13">13</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-7">7</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-6">6</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-1">1</a>|✅|✅|
|<a href="https://github.com/onnx/onnx/blob/main/docs/Operators.md#And">And</a>|<a href="