Log.zig
A structured logger for Zig
logz is an opinionated structured logger that outputs to stdout, stderr, a file or a custom writer using logfmt or JSON. It aims to minimize runtime memory allocation by using a pool of pre-allocated loggers.
Metrics
If you're looking for metrics, check out my <a href="https://github.com/karlseguin/metrics.zig">prometheus library for Zig</a>.
Installation
This library supports the native Zig module system (introduced in Zig 0.11). Add a "logz" dependency to your build.zig.zon.
Zig 0.11
Please use the zig-0.11 branch for a version of the library which is compatible with Zig 0.11.
The master branch of this library follows Zig's master.
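As a sketch, the build.zig.zon entry and the build.zig wiring might look like the following. The URL and module name reflect the project's repository, but the hash is a placeholder; running zig fetch writes the real values for you.

```zig
// build.zig.zon (sketch -- run
//   zig fetch --save "git+https://github.com/karlseguin/log.zig#master"
// to have the actual url/hash entry written for you)
.dependencies = .{
    .logz = .{
        .url = "git+https://github.com/karlseguin/log.zig#master",
        .hash = "REPLACE_WITH_HASH_FROM_ZIG_FETCH", // placeholder
    },
},

// build.zig (sketch)
const logz = b.dependency("logz", .{ .target = target, .optimize = optimize });
exe.root_module.addImport("logz", logz.module("logz"));
```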
Usage
For simple cases, a global logging pool can be configured and used:
// initialize a logging pool
try logz.setup(allocator, .{
.level = .Info,
.pool_size = 100,
.buffer_size = 4096,
.large_buffer_count = 8,
.large_buffer_size = 16384,
.output = .stdout,
.encoding = .logfmt,
});
defer logz.deinit();
// other places in your code
logz.info().string("path", req.url.path).int("ms", elapsed).log();
// The src(@src()) + err(err) combo is great for errors
logz.err().src(@src()).err(err).log();
Alternatively, 1 or more explicit pools can be created:
var requestLogs = try logz.Pool.init(allocator, .{});
defer requestLogs.deinit();
// requestLogs can be shared across threads
requestLogs.err().
string("context", "divide").
float("a", a).
float("b", b).log();
logz.Level.parse([]const u8) ?Level can be used to convert a string into a logz.Level.
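For example, a minimal sketch of mapping a configuration string to a level. The fallback to .Info is this example's choice, and the exact set of accepted strings is an assumption; check the library for its matching rules.

```zig
const logz = @import("logz");

// parse returns null for unrecognized input, so provide a fallback
const level = logz.Level.parse("error") orelse .Info;
var pool = try logz.Pool.init(allocator, .{ .level = level });
defer pool.deinit();
```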
The configuration output can be .stdout, .stderr or a .{.file = "PATH TO FILE"}. More advanced cases can use logTo(writer: anytype) instead of log().
The configuration encoding can be either logfmt or json.
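For example, a sketch of JSON-encoded output written to a file (the path is hypothetical):

```zig
try logz.setup(allocator, .{
    .output = .{ .file = "/var/log/app.log" }, // hypothetical path
    .encoding = .json,
});
defer logz.deinit();
```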
Important Notes
- Attribute keys are never escaped. logz assumes that attribute keys can be written as is.
- Logz can silently drop attributes from a log entry. This only happens when the attribute exceeds the configured size (either of the buffer, or of the buffer + large_buffer) or a large buffer couldn't be created.
- Depending on the pool_strategy configuration, when empty the pool will either dynamically create a logger (.pool_strategy = .create) or return a noop logger (.pool_strategy = .noop). If creation fails, a noop logger will be returned and an error is written using std.log.err.
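As a sketch, a pool that prefers dropping log entries over allocating when exhausted (the field values are illustrative):

```zig
var pool = try logz.Pool.init(allocator, .{
    .pool_size = 16,
    // when all 16 loggers are checked out, hand back a noop logger
    // instead of dynamically creating a new one
    .pool_strategy = .noop,
});
defer pool.deinit();
```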
Pools and Loggers
Pools are thread-safe.
The following functions return a logger:
- pool.debug()
- pool.info()
- pool.warn()
- pool.err()
- pool.fatal()
- pool.logger()
- pool.loggerL()
The returned logger is NOT thread safe.
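In practice this means each thread (or request handler) should acquire its own logger from the shared pool rather than sharing a logger instance. A sketch, where Request and its fields are hypothetical:

```zig
// The pool itself is safe to share across threads.
fn handle(pool: *logz.Pool, req: Request) void {
    // acquire -> populate -> log; the logger never escapes this call
    pool.info()
        .string("path", req.path)
        .int("status", req.status)
        .log();
}
```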
Attributes
The logger can log:
- fmt(key: []const u8, comptime format: []const u8, values: anytype)
- string(key: []const u8, value: ?[]const u8)
- stringZ(key: []const u8, value: ?[*:0]const u8)
- boolean(key: []const u8, value: ?bool)
- int(key: []const u8, value: ?any_int)
- float(key: []const u8, value: ?any_float)
- binary(key: []const u8, value: ?[]const u8) - the value will be URL-safe base64 encoded
- err(e: anyerror) - same as errK("@err", e)
- errK(key: []const u8, e: anyerror)
- stringSafe(key: []const u8, value: ?[]const u8) - assumes the value doesn't need to be encoded
- stringSafeZ(key: []const u8, value: ?[*:0]const u8) - assumes the value doesn't need to be encoded
- ctx(value: []const u8) - same as stringSafe("@ctx", value)
- src(value: std.builtin.SourceLocation) - logs an std.builtin.SourceLocation, the type of value you get from the @src() builtin
- slice(key: []const u8, value: anytype) - writes the slice of values; this calls any on each value
- sliceFmt(key: []const u8, value: anytype, formatter: *const fn(item, writer) error{WriteFailed}!void) - writes the slice of values; this calls formatter for each item. When using the json format, formatter is ignored and sliceFmt simply calls slice.
- any(key: []const u8, value: anytype) - combines all of the above, plus support for structs if they define a "format(self: T, writer: *std.Io.Writer)" method
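A sketch combining a few of these attribute methods (the variable names are illustrative):

```zig
logz.info()
    .string("user", user_name)       // escaped as needed
    .stringSafe("stage", "checkout") // written as-is, no escaping
    .int("attempts", 3)
    .boolean("cached", false)
    .log();
```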
Log Level
Pools are configured with a minimum log level:
- logz.Level.Debug
- logz.Level.Info
- logz.Level.Warn
- logz.Level.Error
- logz.Level.Fatal
- logz.Level.None
When getting a logger for a level lower than the configured level, a noop logger is returned. This logger exposes the same API, but does nothing.
var logs = try logz.Pool.init(allocator, .{.level = .Error});
// this won't do anything
logs.info().boolean("noop", true).log();
The noop logger is meant to be relatively fast, but it doesn't eliminate the cost of computing the values you pass to it. Consider this example:
var logs = try logz.Pool.init(allocator, .{.level = .None});
logs.warn().
    string("expensive", expensiveCall()).
    log();
Although the logger is disabled (the configured level is None), the expensiveCall() function is still evaluated. In such cases, use the pool's shouldLog function:
if (logs.shouldLog(.Warn)) {
    logs.warn().
        string("expensive", expensiveCall()).
        log();
}
Config
Pools use the following configuration. The default value for each setting is shown:
pub const Config = struct {
// The number of loggers to pre-allocate.
pool_size: usize = 32,
// Controls what the pool does when empty. It can either dynamically create
// a new Logger, or return the Noop logger.
pool_strategy: PoolStrategy = .create, // or .noop
// Each logger in the pool is configured with a static buffer of this size.
// An entry that exceeds this size will attempt to expand into the
// large buffer pool. Failing this, attributes will be dropped
buffer_size: usize = 4096,
// The minimum log level to log. `.None` disables all logging
level: logz.Level = .Info,
// Data to prepend at the start of every logged message from this pool
// See the Advanced Usage section
prefix: ?[]const u8 = null,
// Where to write the output
output: Output = .stdout, // or .stderr, or .{.file = "PATH TO FILE"}
encoding: Encoding = .logfmt, // or .json
// How many large buffers to create
large_buffer_count: u16 = 8,
// Size of large buffers.
large_buffer_size: usize = 16384,
// Controls what the large buffer pool does when empty. It can either
// dynamically create a large buffer, or drop the attribute
large_buffer_strategy: LargeBufferStrategy = .create, // or .drop
};
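Since every field has a default, a pool can be created by overriding only what matters. For example:

```zig
// everything not listed keeps the defaults shown above
var pool = try logz.Pool.init(allocator, .{
    .level = .Warn,
    .encoding = .json,
});
defer pool.deinit();
```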
Timestamp and Level
When using the debug, info, warn, err or fatal functions, logs will always begin with @ts=$MILLISECONDS_SINCE_JAN1_1970_UTC @l=$LEVEL, such as: @ts=1679473882025 @l=INFO. With JSON encoding, the object will always have the "@ts" and "@l" fields.
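As an illustration, the same entry under each encoding might look like this (the values are made up):

```
@ts=1679473882025 @l=INFO path=/login ms=12
{"@ts":1679473882025, "@l":"INFO", "path":"/login", "ms":12}
```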
Logger Life cycle
The logger is implicitly returned to the pool when log, logTo or tryLog is called. In rare cases where log, logTo or tryLog are not called, the logger must be explicitly released using its release() function:
// This is a contrived example to show explicit release
var l = logz.info();
_ = l.string("key", "value");
// actually, on second thought, I don't want to log anything
l.release();
Method Chaining
Loggers are mutable. The method chaining (aka fluent interface) is purely cosmetic. The following are equivalent:
// chaining
info().int("over", 9000).log();
// no chaining
var l = info();
_ = l.int("over", 9000);
l.log();
tryLog
The call to log can fail. On failure, a message is written using std.log.err. However, log returns void to improve the API's usability (it doesn't require callers to try or catch).
tryLog can be used instead of log. This function returns a !void and will not write to std.log.err on failure.
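For example, a sketch of handling the failure explicitly (the fallback behaviour here is this example's choice):

```zig
logz.info().string("state", "ready").tryLog() catch |err| {
    // the write failed; unlike log(), nothing was sent to std.log.err,
    // so handle it however the application prefers
    std.debug.print("log write failed: {}\n", .{err});
};
```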
Advanced Usage
Pre-setup
setup(CONFIG) can be called multiple times, but isn't thread safe. The idea is that, at the very start, setup can be called with a minimal config so that any startup errors can be logged. After startup, but before the full application begins, setup is called a 2nd time with the correct config. Something like:
pub fn main() !void {
var general_purpose_allocator = std.heap.GeneralPurposeAllocator(.{}){};
const allocator = general_purpose_allocator.allocator();
// minimal config so that we can use logz while setting things up
try logz.setup(allocator, .{
    .pool_size = 2,
    .buffer_size = 4096,
    .level = .Warn
});
// can safely call logz functions, since we now have a minimal setup
const config = loadConfig();
// more startup things here
// ok, now setup our full logger (which we couldn't do until we read
// our config, which could have failed)
try logz.setup(allocator, .{
    .pool_size = config.log.pool_size,
    .buffer_size = config.log.buffer_size,
    .level = config.log.level
});
...
}
Prefixes
A pool can be configured with a prefix by setting the prefix field of the configuration. When set, all log entries generated by loggers of this pool will contain the prefix.
The prefix is written as-is.
// prefix can be any []const u8. It doesn't have to be a key=value pair,
// it will not be encoded (even when encoding would normally be needed),
// and it doesn't even have to be a valid string.
var p = try logz.Pool.init(allocator, .{.prefix = "keemun"});
defer p.deinit();
p.info().boolean("tea", true).log();
The above will generate a log line: keemun @ts=TIMESTAMP @l=INFO tea=Y
When using .json encoding, the prefix must itself open the JSON object:
var p = try logz.Pool.init(allocator, .{.prefix = "=={"});
defer p.deinit();
p.info().boolean("tea", true).log();
The above will generate a log line: =={"@ts":TIMESTAMP, "@l":"INFO", "tea":true}
Multi-Use Logger
Rather than having a logger automatically returned to the pool when log, logTo or tryLog is called, a logger can be flagged for repeated use with multiuse(). Attributes set on a multi-use logger are kept across calls to log, and the logger is only returned to the pool when release() is explicitly called.