<div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="doc/logo-light.svg"> <source media="(prefers-color-scheme: light)" srcset="doc/logo-dark.svg"> <img alt="Learn WebGPU Logo" src="doc/logo.svg" width="300"> </picture> <p> <a href="https://github.com/eliemichel/LearnWebGPU">LearnWebGPU</a> &nbsp;|&nbsp; <a href="https://github.com/eliemichel/WebGPU-Cpp">WebGPU-C++</a> &nbsp;|&nbsp; <a href="https://github.com/eliemichel/WebGPU-distribution">WebGPU-distribution</a> <br/> <a href="https://github.com/eliemichel/glfw3webgpu">glfw3webgpu</a> &nbsp;|&nbsp; <a href="https://github.com/eliemichel/sdl2webgpu">sdl2webgpu</a> &nbsp;|&nbsp; <a href="https://github.com/eliemichel/sdl3webgpu">sdl3webgpu</a> </p> <p> <a href="https://discord.gg/2Tar4Kt564"><img src="https://img.shields.io/static/v1?label=Discord&message=Join%20us!&color=blue&logo=discord&logoColor=white" alt="Discord | Join us!"/></a> </p> </div>

Slang x WebGPU

This is a demo of a possible use of Slang shader compiler together with WebGPU in C++ (both in native and Web contexts), using CMake.

Key Features

  • The CMake setup fetches the Slang compiler and a WebGPU implementation.
  • Slang shaders are compiled into WGSL at build time, with CMake taking care of dependencies.
  • The Slang reflection API is used to auto-generate boilerplate binding code on the C++ side.
  • An example of Slang's auto-differentiation features is given.
  • Build instructions for native and Web targets.
  • Provides add_slang_shader and add_slang_webgpu_kernel functions to help manage Slang shader targets (in cmake/SlangUtils.cmake).

Notes

[!WARNING] The WebGPU API is still a work in progress at the time of writing. To make sure this setup works, use the webgpu directory provided in this repository, which fetches the exact version of the API for which this project was developed (a Dawn-specific one, by the way). When using emscripten, use version 3.1.72.

[!NOTE] This example relies on the webgpu.hpp and webgpu-raii.hpp shallow wrappers of webgpu.h provided by WebGPU-distribution. If this is a deal-breaker for your use case, you are welcome to open an issue or a pull request that addresses it, as there should be no particular blocker to getting rid of them.

🤔 Ergl, I don't want codegen, but I'm interested in the Slang to WGSL part...

Sure, have a look at examples/00_no_codegen. All it needs is cmake/FetchSlang.cmake and the add_slang_shader function from cmake/SlangUtils.cmake, so you can strip down the whole codegen part if you don't like it.
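For that path, a minimal CMakeLists might look like the following. This is a hypothetical sketch: it assumes add_slang_shader takes NAME/SOURCE/ENTRY-style arguments similar to add_slang_webgpu_kernel, and the target and file names are placeholders; check cmake/SlangUtils.cmake and examples/00_no_codegen for the exact signature.

```cmake
# Hypothetical sketch; the authoritative signature of add_slang_shader
# lives in cmake/SlangUtils.cmake.
include(cmake/FetchSlang.cmake)
include(cmake/SlangUtils.cmake)

add_slang_shader(
	compile_hello_world_shader
	SOURCE shaders/hello-world.slang
	ENTRY computeMain
)
```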

What is this?

Slang is a great modern shader programming language, but of course it does not natively run in WebGPU. One possible workaround is to ship the Slang compiler to the client, but this is heavy in both bandwidth and client computation time, so this setup instead transpiles shaders into WGSL (the WebGPU Shading Language) at build time.

Compiling shaders is one thing, but when working on a shader-intensive application, one easily spends more time writing kernel binding boilerplate than actual business logic (i.e., the nice things we actually want to code). Among Slang's nice features is a reflection API, which this project uses to automatically generate kernel binding code.

Let us consider the following example:

// hello-world.slang
StructuredBuffer<float> buffer0;
StructuredBuffer<float> buffer1;
RWStructuredBuffer<float> result;

[shader("compute")]
[numthreads(8,1,1)]
void computeMain(uint3 threadId : SV_DispatchThreadID)
{
    uint index = threadId.x;
    result[index] = buffer0[index] + buffer1[index];
}
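To make the kernel's semantics concrete: each GPU invocation adds one pair of elements, so dispatching N threads is equivalent to this CPU-side loop (a plain C++ sketch for illustration, not part of the generated code):

```cpp
#include <cstddef>
#include <vector>

// CPU equivalent of computeMain: one "thread" per index.
std::vector<float> addBuffers(const std::vector<float>& buffer0,
                              const std::vector<float>& buffer1) {
    std::vector<float> result(buffer0.size());
    for (size_t index = 0; index < result.size(); ++index) {
        result[index] = buffer0[index] + buffer1[index];
    }
    return result;
}
```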

Our automatic code generation will create HelloWorldKernel.h and HelloWorldKernel.cpp files that can be used as follows:

// main.cpp
#include "generated/HelloWorldKernel.h"

// (assuming 'device' is a wgpu::Device object)
generated::HelloWorldKernel kernel(device);

// (assuming 'buffer0', 'buffer1' and 'result' are wgpu::Buffer objects)
wgpu::BindGroup bindGroup = kernel.createBindGroup(buffer0, buffer1, result);

// First argument can be ThreadCount{ ... } or WorkgroupCount{ ... }
kernel.dispatch(ThreadCount{ 10 }, bindGroup);

Note for instance how the signature of createBindGroup is automatically adapted to the resources declared in the shader.
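For reference, dispatching by ThreadCount rather than WorkgroupCount just means rounding the thread count up to a whole number of workgroups, whose size is declared in the shader (8 along X here, from [numthreads(8,1,1)]). A sketch of that arithmetic, not the generated code itself:

```cpp
#include <cstdint>

// Ceil-divide a thread count by the workgroup size declared in the
// shader's [numthreads(...)] attribute to get the workgroup count.
uint32_t workgroupCountForThreads(uint32_t threadCount, uint32_t workgroupSize = 8) {
    return (threadCount + workgroupSize - 1) / workgroupSize;
}
```

So ThreadCount{ 10 } in the snippet above amounts to dispatching 2 workgroups of 8 threads, with the last 6 invocations reading past the intended range unless the shader (or buffer sizing) accounts for it.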

All it takes to use the generator is to use the custom add_slang_webgpu_kernel command that we define in cmake/SlangUtils.cmake:

add_slang_webgpu_kernel(
	generate_hello_world_kernel
	NAME HelloWorld
	SOURCE shaders/hello-world.slang
	ENTRY computeMain
)

The target generate_hello_world_kernel is a static library target that generates and builds HelloWorldKernel, given the Slang shader set as source.

[!NOTE] The add_slang_webgpu_kernel function can handle multiple entrypoints. For instance, specifying ENTRY foo bar will generate a kernel that has a dispatchFoo() and a dispatchBar() method. For convenience, a simple dispatch() alias is defined when there is only one entrypoint.
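For example, a two-entrypoint kernel declaration might look like this (the target, kernel, file, and entry point names are placeholders):

```cmake
add_slang_webgpu_kernel(
	generate_my_kernel
	NAME MyKernel
	SOURCE shaders/my-kernel.slang
	ENTRY foo bar
)
```

The generated generated::MyKernel class would then expose dispatchFoo() and dispatchBar() rather than the single dispatch() alias.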

Lastly, this repository provides a basic setup to fetch a precompiled Slang release in a CMake project (see cmake/FetchSlang.cmake) that is compatible with cross-compilation (i.e. the slangc executable is fetched for the host system while the Slang libraries are fetched -- if needed -- for the target system).

Building

This project can be used either in a fully native scenario, where the target is an executable, or in a web cross-compilation scenario, where the target is a WebAssembly module (and HTML demo page).

Compilation of a native app

Nothing surprising in this case:

# Configure the build
cmake -B build

# Compile everything
cmake --build build

[!NOTE] This project uses CMake and tries to bundle as many dependencies as possible. However, it will fetch the following at configuration time:

  • A prebuilt version of Dawn, to get a WebGPU implementation (wgpu-native is a possible alternative, modulo some tweaks).
  • A prebuilt version of Slang, both executable and library.

You may then explore build/examples to execute the various examples.

Cross-compilation of a WebAssembly module

Cross-compilation and code generation are difficult roommates, but here is how to get them to live together: we create two build directories.

  1. We configure a build-native build, where we disable the examples so that it only builds the generator. Indeed, even if the end target is a WebAssembly module, we still need the generator to be built for the host system (where the compilation occurs):
# Configure a native build, to compile the code generator
# We turn off examples (or reuse the native build done previously and skip this)
cmake -B build-native -DSLANG_WEBGPU_BUILD_EXAMPLES=OFF

[!NOTE] Setting SLANG_WEBGPU_BUILD_EXAMPLES=OFF has the nice byproduct of not fetching Dawn, because WebGPU is not needed for the generator, and the WebAssembly build has built-in support for WebGPU (so no need for Dawn there).

We then build the generator target:

# Build the generator with the native build
cmake --build build-native --target slang_webgpu_generator
  2. We configure a build-web build with emcmake to set up cross-compilation to WebAssembly. This time we do not build the generator (SLANG_WEBGPU_BUILD_GENERATOR=OFF) but rather tell CMake where to find it with the SlangWebGPU_Generator_DIR option:
# Configure a cross-compilation build
emcmake cmake -B build-web -DSLANG_WEBGPU_BUILD_GENERATOR=OFF -DSlangWebGPU_Generator_DIR=build-native

[!NOTE] The emcmake command is provided by emscripten. Make sure you activate your emscripten environment first (preferably version 3.1.72).

We can now build the WebAssembly module, which will call the generator from build-native whenever needed:

# Build the Web targets
cmake --build build-web

And it is now ready to be tested!

# Start a local server
python -m http.server 8000

Then browse for instance to:

  • http://localhost:8000/build-web/examples/00_no_codegen/slang_webgpu_example_00_no_codegen.html
  • http://localhost:8000/build-web/examples/01_simple_kernel/slang_webgpu_example_01_simple_kernel.html
  • http://localhost:8000/build-web/examples/02_multiple_entrypoints/slang_webgpu_example_02_multiple_entrypoints.html
  • http://localhost:8000/build-web/examples/03_module_import/slang_webgpu_example_03_module_import.html
  • http://localhost:8000/build-web/examples/04_uniforms/slang_webgpu_example_04_uniforms.html
  • http://localhost:8000/build-web/examples/05_autodiff/slang_webgpu_example_05_autodiff.html

Going further

This repository is only meant to be a demo. To go further, start from one of the examples and progressively turn it into something more complex. You may eventually want to move your example into src/. You will probably also be tempted to tune the generator itself to your needs.
