Asu
An image on demand server for OpenWrt based distributions
Attendedsysupgrade Server (GSoC 2017)
This project simplifies the sysupgrade process for upgrading the firmware of
devices running OpenWrt or distributions based on it. These tools offer an easy
way to reflash the router with a new firmware version
(including all packages) without the need to use opkg.
ASU is based on an API to request custom firmware images with any selection of packages pre-installed. This avoids the need to set up a build environment, and makes it possible to create a custom firmware image even using a mobile device.
Clients of the Sysupgrade Server
OpenWrt Firmware Selector
Simple web interface written in vanilla JavaScript, currently developed by @mwarning. It offers a device search based on model names and shows links either to official images or to images requested via the ASU API. Please join the development at the GitLab repository.

LuCI app
The package
luci-app-attendedsysupgrade
offers a simple tool under System > Attended Sysupgrade. It requests a new
firmware image that includes the current set of packages, waits until it's built
and flashes it. If "Keep Configuration" is checked in the GUI, the device
upgrades to the new firmware without any need to re-enter any configuration or
re-install any packages.

CLI
With OpenWrt SNAPSHOT-r26792 or newer (and in the 24.10 release) the CLI app
auc was replaced
with owut
as a more comprehensive CLI tool to provide an easy way to upgrade your device.

Server
The server listens for image requests and, if valid, automatically generates them. It coordinates several OpenWrt ImageBuilders and caches the resulting images in a Redis database. If an image is cached, the server can provide it immediately without rebuilding.
Active servers
[!NOTE] Official server using ImageBuilder published on OpenWrt Downloads.
[!NOTE] Unofficial servers may run a modified ImageBuilder:
- ImmortalWrt
- LibreMesh (only stable and oldstable OpenWrt versions)
- sysupgrade.guerra24.net
- Create a pull request to add your server here
Run your own server
For security reasons, each build happens inside a container so that one build can't affect another. To make this work, a Podman container runs an API service so that workers can themselves execute builds inside containers.
Installation
The server uses podman-compose to manage the containers. On a Debian based
system, install the following packages:
sudo apt install podman-compose
A Python library is used to communicate with Podman over a socket. To enable
the socket, either systemd is required or the socket must be started manually
using Podman itself:
# systemd
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
# manual (must stay open)
podman system service --time=0 unix:/run/user/$(id -u)/podman/podman.sock
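Before wiring the socket into ASU, it can help to check that the socket file actually exists. A minimal sketch, assuming the default rootless Podman socket path used above (adjust the path if your setup differs):

```shell
# Default rootless Podman socket path (assumption: systemd user
# session or a manual "podman system service" as shown above)
SOCKET="/run/user/$(id -u)/podman/podman.sock"

# -S tests whether the path exists and is a socket
if [ -S "$SOCKET" ]; then
    SOCKET_STATE="ready"
else
    SOCKET_STATE="missing"
fi
echo "Podman socket ($SOCKET): $SOCKET_STATE"
```

If the socket is missing, re-run the systemd or manual commands above before continuing.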
Now you can either pull the latest ASU containers or build them yourself; run one of the following two commands:
# use existing containers
podman-compose pull
# build containers locally
podman-compose build
The services are configured via environment variables, which can be set in a
.env file:
echo "PUBLIC_PATH=$(pwd)/public" > .env
echo "CONTAINER_SOCKET_PATH=/run/user/$(id -u)/podman/podman.sock" >> .env
# optionally allow custom scripts to run on first boot
echo "ALLOW_DEFAULTS=1" >> .env
Now it's possible to run all services via podman-compose:
podman-compose up -d
This will start the server, the Podman API container and one worker. Once the
server is running, it's possible to request images via the API on
http://localhost:8000. Modify podman-compose.yml to change the port.
Production
For production it's recommended to use a reverse proxy like nginx or caddy.
You can find a Caddy sample configuration in misc/Caddyfile.
If you want your server to remain active after you log out of the server, you
must enable "linger" in loginctl:
loginctl enable-linger
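You can verify the setting afterwards. This sketch assumes systemd-logind is available and falls back to "unknown" where it is not:

```shell
# Query the Linger property for the current user via loginctl;
# prints "Linger=yes" once lingering is enabled. If loginctl is
# unavailable or fails, fall back to a placeholder value.
LINGER_STATE="$(loginctl show-user "$(id -un)" --property=Linger 2>/dev/null || echo 'Linger=unknown')"
echo "$LINGER_STATE"
```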
System requirements
- 2 GB RAM (4 GB recommended)
- 2 CPU cores (4 cores recommended)
- 50 GB disk space (200 GB recommended)
Squid Cache
Instead of creating and uploading SNAPSHOT ImageBuilder containers every day,
only a container with installed dependencies and a setup.sh script is offered.
ASU automatically runs that script to set up the latest ImageBuilder. To
speed up the process, a Squid cache can be used to store the ImageBuilder
archives locally. To enable the cache, set SQUID_CACHE=1 in the .env file.
To have the cache accessible from running containers, the Squid port 3128 inside
a running container must be forwarded to the host. This can be done by adding
the following line to the .config/containers/containers.conf file:
[network]
pasta_options = [
"-a", "10.0.2.0",
"-n", "24",
"-g", "10.0.2.2",
"--dns-forward", "10.0.2.3",
"-T", "3128:3128"
]
If you know a better setup, please create a pull request.
Development
After cloning this repository, install uv, which manages the Python
dependencies.
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies
uv sync --extra dev
Running the server
uv run fastapi dev asu/main.py
Running a worker
source .env
uv run rq worker
API
The API is documented via OpenAPI and can be viewed interactively on the server.
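Since the server is built with FastAPI, the interactive documentation is served under /docs by default. As a rough sketch of what a build request looks like, the payload below is a hypothetical example: the version, target, profile, and package names are placeholders you would replace with values valid for your own device.

```shell
# Hypothetical build request payload; all field values are examples
# only and must match a real OpenWrt release, target and profile.
cat > request.json <<'EOF'
{
  "version": "24.10.0",
  "target": "ath79/generic",
  "profile": "tplink_archer-c7-v2",
  "packages": ["luci", "qrencode"]
}
EOF
echo "wrote request.json"

# Submit it to a running server (uncomment once the server is up):
# curl -s -X POST http://localhost:8000/api/v1/build \
#   -H "Content-Type: application/json" -d @request.json
```

The server responds with a request hash that can be polled until the image is built and ready for download.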