# Hubris
A lightweight, memory-protected, message-passing kernel for deeply embedded systems.
Hubris is a microcontroller operating environment designed for deeply embedded systems with reliability requirements. Its design was initially proposed in RFD41, but has evolved considerably since then.
## Learning

Developer documentation is written in Asciidoc in the `doc/` directory. It is rendered via GitHub Pages and is available at https://oxidecomputer.github.io/hubris.
## Navigating
The repo is laid out as follows.
- `app/` is where the top-level binary crates for applications live, e.g. `app/gimlet` contains the firmware crate for Gimlet. Generally speaking, if you want to build an image for something, look here.
- `build/` contains the build system and supporting crates.
- `chips/` contains peripheral definitions and debugging support files for individual microcontrollers.
- `doc/` contains developer documentation.
- `drv/` contains drivers, a mix of simple driver lib crates and fully-fledged server bin crates. Current convention is that `drv/SYSTEM-DEVICE` is the driver for `DEVICE` on `SYSTEM` (where `SYSTEM` is usually an SoC name), whereas `drv/SYSTEM-DEVICE-server` is the server bin crate.
- `idl/` contains interface definitions written in Idol.
- `lib/` contains assorted utility libraries we've written. If you need to make a reusable crate that doesn't fit into one of the other directories, it probably belongs here.
- `support/` contains some interface and programming support files, like fake certificates and programmer firmware images.
- `sys/` contains the "system" bits of Hubris, namely the kernel (`sys/kern`), the shared crate defining the ABI (`sys/abi`), and the user library used by tasks (`sys/userlib`).
- `task/` contains reusable tasks that aren't drivers. The distinction between things that live in `task/` vs in `drv/something-server` is fuzzy. Use your judgement.
- `test/` contains the test framework and binary crates for building it for various boards.
- `website/` contains the source code for the Hubris website.
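As a concrete illustration of the `drv/` naming convention above (the crate names here are examples chosen for illustration, not a claim about what currently exists in the tree):

```shell
# Illustrative names only: the drv/SYSTEM-DEVICE convention described above.
device_driver="drv/stm32h7-spi"          # lib crate: the DEVICE driver for a SYSTEM (SoC)
server_crate="${device_driver}-server"   # bin crate exposing that driver as a server
echo "$server_crate"                     # prints: drv/stm32h7-spi-server
```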
## Developing

We currently support Linux and Windows as first-tier platforms. Because of inconsistencies in linker behavior, images built on different platforms may not match, so we recommend choosing one platform as your "official" build environment for producing blessed images. macOS and illumos are both used as informal build platforms by Oxide employees, though they are not currently tested in CI. (Oxide's blessed images are produced on Linux.)
To submit changes for review, push them to a branch in a fork and submit a pull
request to merge that branch into master. For details, see
CONTRIBUTING.md.
### Prereqs

You will need:

- A rustup-based toolchain install. `rustup` will take care of automatically installing our pinned toolchain version, and the cross-compilation targets, when you first try to build.
- libusb, typically found from your system's package manager as `libusb-1.0.0` or similar.
- libftdi1, found as `libftdi1-dev` or similar.
- If you will be running GDB, you should install `arm-none-eabi-gdb`. This is typically available from your system's package manager under a name like `arm-none-eabi-gdb` or `gdb-multiarch`. macOS users can run `brew install --cask gcc-arm-embedded` to install the official ARM binaries.
- The Hubris debugger, Humility. Note that `cargo install` interacts strangely with the `rust-toolchain.toml` file present in the root of this repository; if you run the following command verbatim to install Humility, do so from a different directory: `cargo install --git https://github.com/oxidecomputer/humility.git --locked humility-bin`
  - Requires `cargo-readme` as a dependency: `cargo install cargo-readme`
#### Windows

There are three alternative ways to install OpenOCD.

First, you can build it from source or download unofficial binaries.

Alternatively, you can install it with chocolatey:

```console
> choco install openocd
```

Lastly, you could install it with scoop:

```console
> scoop bucket add extras
> scoop install openocd
```
Note: openocd installed via scoop has proven problematic for some
users. If you experience problems, try installing via choco or from source
(see above).
To use the ST-Link programmer, you'll probably need to install this driver.
This step isn't necessary to build and run Hubris, but if you want to communicate over a serial link and your terminal doesn't support that, you'll want to use PuTTY; this guide does a good job of explaining how.
### Build

We do not use `cargo build` or `cargo run` directly because they are too inflexible for our purposes: we have a complex multi-architecture build, which is a bit beyond them.

Instead, the repo includes a Cargo extension called `xtask` that namespaces our custom build commands.

`cargo xtask dist TOMLFILE` builds a distribution image for the application described by the TOML file. For example:

- `cargo xtask dist app/demo-stm32f4-discovery/app.toml` - stm32f4-discovery
- `cargo xtask dist app/demo-stm32f4-discovery/app-f3.toml` - stm32f3-discovery
- `cargo xtask dist app/lpc55xpresso/app.toml` - lpcxpresso55s69
- `cargo xtask dist app/demo-stm32g0-nucleo/app-g031.toml` - stm32g031-nucleo
- `cargo xtask dist app/demo-stm32g0-nucleo/app-g070.toml` - stm32g070-nucleo
- `cargo xtask dist app/demo-stm32h7-nucleo/app-h743.toml` - nucleo-ih743zi2
- `cargo xtask dist app/demo-stm32h7-nucleo/app-h753.toml` - nucleo-ih753zi
- `cargo xtask dist app/gemini-bu/app.toml` - Gemini bringup board
### Iterating

Because a full image build can take 10 seconds or more, depending on what you've changed, when you're iterating on a task or kernel you'll probably want to build it separately. This is what `cargo xtask build` is for.

For instance, to build `task-ping` as it would be built in one of the images, but without building the rest of the demo, run:

```console
$ cargo xtask build app/gimletlet/app.toml ping
```
### Running clippy

The `cargo xtask clippy` subcommand can be used to run clippy against one or more tasks in the context of a particular image:

```console
$ cargo xtask clippy app/gimletlet/app.toml ping pong
```
### Integrating with rust-analyzer

The Hubris build system will not work with rust-analyzer out of the box. However, `cargo xtask lsp` is here to help: it takes a Rust file as its argument and returns JSON-encoded configuration for how to set up rust-analyzer.

To use this data, some editor configuration is required! (We haven't made plugins yet, but it would certainly be possible.)
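To make the shape of that JSON concrete, here is a sketch. The field names (`hash`, `app`, `task`, and the `Ok`/`Err` wrapper) are inferred from the Neovim configuration further down; treat the sample as a hypothetical, since the real output may carry more fields:

```shell
# Hypothetical sample of `cargo xtask lsp` output; fields inferred from the
# Neovim config in this README, not an exhaustive schema.
result='{"Ok":{"hash":"abc123","app":"app/gimletlet/app.toml","task":"ping"}}'
# Pull out the task name (python3 used here for portable JSON parsing)
echo "$result" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Ok"]["task"])'
# prints: ping
```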
Using Neovim and rust-tools,
here's an example configuration:
```lua
-- monkeypatch rust-tools to correctly detect our custom rust-analyzer
require'rust-tools.utils.utils'.is_ra_server = function (client)
  local name = client.name
  local target = "rust_analyzer"
  return string.sub(client.name, 1, string.len(target)) == target
      or client.name == "rust_analyzer-standalone"
end

-- Configure LSP through rust-tools.nvim plugin, with lots of bonus
-- content for Hubris compatibility
local cache = {}
local clients = {}
require'rust-tools'.setup{
  tools = { -- rust-tools options
    autoSetHints = true,
    inlay_hints = {
      show_parameter_hints = false,
      parameter_hints_prefix = "",
      other_hints_prefix = "",
      -- do other configuration here as desired
    },
  },
  server = {
    on_new_config = function(new_config, new_root_dir)
      local bufnr = vim.api.nvim_get_current_buf()
      local bufname = vim.api.nvim_buf_get_name(bufnr)
      local dir = new_config.root_dir()
      if string.find(dir, "hubris") then
        -- Run `xtask lsp` for the target file, which gives us a JSON
        -- dictionary with bonus configuration.
        local prev_cwd = vim.fn.getcwd()
        vim.cmd("cd " .. dir)
        local cmd = dir .. "/target/debug/xtask lsp "
        -- Notify `xtask lsp` of existing clients in the CLI invocation,
        -- so it can check against them first (which would mean a faster
        -- attach)
        for _,v in pairs(clients) do
          local c = vim.fn.escape(vim.json.encode(v), '"')
          cmd = cmd .. '-c"' .. c .. '" '
        end
        local handle = io.popen(cmd .. bufname)
        handle:flush()
        local result = handle:read("*a")
        handle:close()
        vim.cmd("cd " .. prev_cwd)
        -- If `xtask` doesn't know about `lsp`, then it will print an error to
        -- stderr and return nothing on stdout.
        if result == "" then
          vim.notify("recompile `xtask` for `lsp` support", vim.log.levels.WARN)
        end
        -- If the given file should be handled with special care, then
        -- we give the rust-analyzer client a custom name (to prevent
        -- multiple buffers from attaching to it), then cache the JSON in
        -- a local variable for use in `on_attach`
        local json = vim.json.decode(result)
        if json["Ok"] ~= nil then
          new_config.name = "rust_analyzer_" .. json.Ok.hash
          cache[bufnr] = json
          table.insert(clients, {toml = json.Ok.app, task = json.Ok.task})
        else
          -- TODO:
          -- vim.notify(vim.inspect(json.Err), vim.log.levels.ERROR)
        end
      end
    end,
    on_attach = function(client, bufnr)
      local json = cache[bufnr]
      if json ~= nil then
        loc
```
