# Steiger
A container build orchestrator for multi-service projects with native support for Docker BuildKit, Bazel, Ko, Railpack, and Nix. Steiger coordinates parallel builds and handles registry operations with automatic platform detection.
## Project Status
- ✅ Basic building and pushing: Core functionality is stable
- ✅ Multi-service parallel builds: Working with real-time progress
- ✅ Registry integration: Push to any OCI-compliant registry
- ⏳ Dev mode: File watching and rebuild-on-change (planned)
- 🚧 Deploy: Native Kubernetes deployment support (works, but needs to be extended)
## Supported Builders

### Docker BuildKit

Uses Docker BuildKit with the `docker-container` driver for efficient, cached builds. Steiger manages the BuildKit builder instance automatically.
Requirements:

- Docker with BuildKit support
- `docker-container` driver (managed by Steiger)
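A minimal Docker builder entry in `steiger.yml` might look like this (the service name and context path are illustrative; see the full configuration reference below):

```yaml
build:
  frontend:
    type: docker
    context: ./frontend
```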
### Railpack

Uses Railpack as a custom BuildKit frontend for automatic build plan generation. Railpack analyzes your application and generates an optimized build plan, which is then built using Docker BuildKit.

If a `railpack.json` config file exists in the build context, it is used directly. Otherwise, `railpack prepare` is run to generate one automatically.
#### Non-root builds

By default, Railpack images run as root. Set `nonroot: true` to re-package the image so it runs as a non-root user (`railpack`, UID/GID 1000). This adds a second build stage that creates the user, copies the home directory, and sets `USER railpack`.

```yaml
build:
  my-app:
    type: railpack
    context: ./my-app
    nonroot: true
```
Requirements:

- Docker with BuildKit support (same as the Docker builder)
- `railpack` CLI
### Bazel

Integrates with Bazel builds that output OCI image layouts. Works best with `rules_oci` for creating OCI-compatible container images.
Key difference from Skaffold: Steiger works directly with OCI image layouts, skipping the TAR export step that Skaffold requires. This allows direct pushing to registries without intermediate file formats.
### Ko

Supports Ko for building Go applications into container images without Dockerfiles.
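For example, a Ko service entry in `steiger.yml` (the service name and import path are illustrative):

```yaml
build:
  my-service:
    type: ko
    importPath: ./cmd/service
```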
### Nix

Integrates with Nix flake outputs that produce OCI images.
Requirements:

- Flakes enabled (`--extra-experimental-features 'nix-command flakes'`)
- `pkgs.ociTools.buildImage` (available via the Steiger overlay or nixpkgs#390624)
Example flake:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    steiger.url = "github:brainhivenl/steiger";
  };

  outputs = {
    nixpkgs,
    steiger,
    ...
  }: let
    system = "x86_64-linux";
    overlays = [steiger.overlays.ociTools];
    pkgs = import nixpkgs {inherit system overlays;};
  in {
    steigerImages.${system} = {
      default = pkgs.ociTools.buildImage {
        name = "hello";
        copyToRoot = pkgs.buildEnv {
          name = "hello-env";
          paths = [pkgs.hello];
          pathsToLink = ["/bin"];
        };
        config.Cmd = ["/bin/hello"];
        compressor = "none";
      };
    };

    devShells.${system} = {
      default = pkgs.mkShell {
        packages = [steiger.packages.${system}.default];
      };
    };
  };
}
```
#### Cross-compilation
Steiger provides a nested outputs structure for organizing packages when you need to configure cross-compilation yourself using specialized tools like crane for Rust projects.
##### Configuration

Enable the nested path structure by adding the following to your `steiger.yml`:

```yaml
build:
  services:
    type: nix
    platformStrategy: crossSystem
    packages:
      service: default
```
This changes how packages should be organized in your flake outputs, creating a nested structure that separates build host and target systems.
##### Attribute Path Structure

When `platformStrategy: crossSystem` is enabled, packages must be organized as:

```
<flake-path>#steigerImages.<host-system>.<target-system>.<package-name>
```

Examples:

- `#steigerImages.x86_64-linux.aarch64-linux.default`: build on x86_64-linux, targeting aarch64-linux
- `#steigerImages.aarch64-darwin.x86_64-linux.default`: build on aarch64-darwin, targeting x86_64-linux
- `#steigerImages.x86_64-linux.x86_64-linux.default`: native build on x86_64-linux
This nested structure allows you to:
- Build for all combinations of host and target systems
- Configure your own cross-compilation toolchains
- Maintain clear separation between build-time and runtime dependencies
Example flake using crane and rust-overlay for cross-compiling a Rust service:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    steiger.url = "github:brainhivenl/steiger";
    crane.url = "github:ipetkov/crane";
    rust-overlay = {
      url = "github:oxalica/rust-overlay";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = {
    nixpkgs,
    steiger,
    crane,
    rust-overlay,
    ...
  }: let
    systems = ["aarch64-darwin" "x86_64-darwin" "x86_64-linux" "aarch64-linux"];
    overlays = [steiger.overlays.ociTools (import rust-overlay)];

    # for more information see:
    # https://github.com/ipetkov/crane/blob/master/examples/cross-rust-overlay/flake.nix
    crateExpression = {
      craneLib,
      openssl,
      libiconv,
      lib,
      pkg-config,
      stdenv,
    }:
      craneLib.buildPackage {
        src = craneLib.cleanCargoSource ./.;
        strictDeps = true;

        nativeBuildInputs =
          [pkg-config]
          ++ lib.optionals stdenv.buildPlatform.isDarwin [libiconv];

        buildInputs = [openssl];
      };
  in {
    steigerImages = steiger.lib.eachCrossSystem systems (localSystem: crossSystem: let
      pkgs = import nixpkgs {
        system = localSystem;
        inherit overlays;
      };
      pkgsCross = import nixpkgs {
        inherit localSystem crossSystem overlays;
      };
      craneLib = crane.mkLib pkgsCross;
      package = pkgsCross.callPackage crateExpression {inherit craneLib;};
    in {
      default = pkgs.ociTools.buildImage {
        name = "my-service";
        copyToRoot = pkgsCross.buildEnv {
          name = "service-env";
          paths = [
            package
            pkgs.dockerTools.caCertificates
          ];
          pathsToLink = [
            "/bin"
            "/etc"
          ];
        };
        config.Cmd = ["/bin/${package.pname}"];
        compressor = "none";
      };
    });
  };
}
```
## Build Caching
Steiger delegates caching to the underlying build systems rather than implementing its own cache layer:
- Docker BuildKit: Leverages BuildKit's native layer caching and build cache
- Railpack: Uses Docker BuildKit caching under the hood with Railpack's optimized build plans
- Bazel: Uses Bazel's extensive caching system (action cache, remote cache, etc.)
- Ko: Benefits from Go's build cache and Ko's layer caching
- Nix: Utilizes Nix's content-addressed store and binary cache system for reproducible, cached builds
This approach avoids cache invalidation issues and performs comparably to Skaffold in cached scenarios, with better performance in some cases.
## Installation

### Using cargo

```sh
cargo install steiger --git https://github.com/brainhivenl/steiger.git
```

### Using nix

Run directly without installation:

```sh
nix run github:brainhivenl/steiger -- build
```
### Using GitHub Actions

Use the official GitHub Action in your workflows:

```yaml
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: brainhivenl/steiger-action@v1
        with:
          cmd: build
          args: --repo ghcr.io/my-org/my-project
          version: v0.0.1
```

The action supports these inputs:

- `cmd` (required): the steiger command to run (default: `build`)
- `args` (optional): arguments to pass to the command
- `version` (optional): version of steiger to use (default: `v0.0.1`)
### Build from source

```sh
git clone https://github.com/brainhivenl/steiger
cd steiger
cargo build --release
```
## Configuration

Create a `steiger.yml` file:

```yaml
build:
  frontend:
    type: docker
    context: ./frontend
    target: web # optional
    dockerfile: Dockerfile.prod # optional, defaults to Dockerfile
    buildArgs:
      ENV: ${env} # variable substitution is supported
  backend:
    type: bazel
    targets:
      app: //cmd/server:image
      migrations: //cmd/migrations:image
  go-service:
    type: ko
    importPath: ./cmd/service
  auto-detect:
    type: railpack
    context: ./my-app
    nonroot: true # optional, run as non-root user (default: false)
  flake:
    type: nix
    packages:
      api: default # attribute path to package e.g. `outputs.packages.<system>.default`

deploy:
  brainpod:
    type: helm
    path: helm
    namespace: my-app
    valuesFiles:
      - helm/values.yaml

insecureRegistries:
  - my-registry.localhost:5000

profiles:
  prod:
    env: prod
```
### Bazel Configuration

For Bazel builds, ensure your targets produce OCI image layouts:

```starlark
# BUILD.bazel
load("@rules_oci//oci:defs.bzl", "oci_image")

oci_image(
    name = "image",
    base = "@distroless_base",
    entrypoint = ["/app"],
    tars = [":app_layer"],
)
```
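The `:app_layer` tar referenced above is not defined here; as a sketch, it could be produced with rules_pkg (the target names and the `:app` binary are assumptions):

```starlark
# BUILD.bazel -- hypothetical layer containing the application binary
load("@rules_pkg//pkg:tar.bzl", "pkg_tar")

pkg_tar(
    name = "app_layer",
    srcs = [":app"],  # e.g. a go_binary or cc_binary target
    package_dir = "/",
)
```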
Platform-specific builds:

```yaml
build:
  multi-arch:
    type: bazel
    platforms:
      linux/amd64: //platforms:linux_amd64
      linux/arm64: //platforms:linux_arm64
    targets:
      app: //cmd/app:image
```
## Usage

### Build All Services

```sh
steiger build
```

### Build and Push

```sh
steiger build --repo gcr.io/my-project
```

This will:

- Build all services in parallel
- Push the resulting images to `gcr.io/my-project`
