CoroTracer
A cross-language, zero-copy coroutine observability framework based on the cTP shared-memory protocol, utilizing lock-free ring buffers for ultra-low overhead state tracing.

Why I built this: while debugging one of my own M:N schedulers, I ran into an especially nasty failure mode. Under heavy load, throughput would suddenly collapse to zero, but ASAN and TSAN stayed silent because nothing was corrupt in the usual memory-safety sense. It turned out to be a classic
lost wakeup: the coroutine had become logically unreachable, but traditional tooling was terrible at surfacing that kind of state-machine break. coroTracer was built for exactly this class of problem.
coroTracer is an out-of-process coroutine trace collector.
It is designed for M:N coroutine schedulers, with very specific goals:
- capture coroutine state transitions
- minimize interference with the target process
- emit reusable raw traces
- provide a reliable low-level foundation for later offline analysis and database export
It is not positioned as an APM product or an online analysis platform.
At the moment, this repository is focused on two things:
- safely collecting coroutine state into JSONL
- exporting an existing JSONL trace into SQLite / MySQL / PostgreSQL / CSV
The core safety properties of the collection protocol have also been modeled and proved in Lean 4; the relevant proof files are included in the repository.
Project status: the project is usable end to end, and the collection, persistence, and export pipeline works as a closed loop. The one obvious remaining limitation is that collection capacity is fixed to a finite coroutine count rather than growing dynamically. Updates will continue, though likely at a much slower pace than before. This release focused on data format conversion and export and did not touch the core collection path; Codex genuinely sped up iteration here, which helped it land much faster.
Architecture
+-----------------------+                                 +-----------------------+
|  Target Application   |                                 |   Go Tracer Engine    |
|  (C++, Rust, Zig...)  |        [ Lock-Free SHM ]        |                       |
|                       |                                 |                       |
|  +-----------------+  |      +-----------------+        |  +-----------------+  |
|  |  cTP SDK Probe  |========>| StationData [N] |<=======|==| Harvester Loop  |  |
|  +-----------------+  | Write+-----------------+ Read   |  +-----------------+  |
|          |            |                                 |           |           |
|      [ Socket ]-------|---(Wakeup)---UDS---(Listen)-----|-->  [ File I/O ]      |
+-----------------------+                                 +-----------------------+
                                                                      |
                                                                      v
                                                            +------------------+
                                                            |   trace_output   |
                                                            |      .jsonl      |
                                                            +------------------+
                                                                      |
                                                                      v
                                                    +-----------------------------------+
                                                    | SQLite / MySQL / PostgreSQL / CSV |
                                                    +-----------------------------------+
Current Capabilities
1. Trace Collection Mode
The Go engine is responsible for:
- creating shared memory
- creating the Unix Domain Socket
- launching the target process
- continuously harvesting coroutine events from shared memory
- writing the result as JSONL
Each JSONL line looks roughly like this:
{"probe_id":123,"tid":456,"addr":"0x0000000000000000","seq":2,"is_active":true,"ts":123456789}
Those fields correspond to the source-level TraceRecord:
- probe_id: unique coroutine probe identifier
- tid: real OS thread ID
- addr: suspension address or related coroutine address
- seq: slot sequence number
- is_active: whether the coroutine is currently active
- ts: timestamp
2. Export Mode
The repository now includes an export/ directory that supports converting an existing JSONL trace into:
- a SQLite database
- a MySQL database
- a PostgreSQL database
- a DataFrame-friendly CSV file
This is explicitly a second-stage export from an existing JSONL trace.
It is not "trace and write to a database at the same time."
3. C++20 SDK
The repository currently ships a C++20 header-only SDK, SDK/c++/coroTracer.h. Its responsibilities are:
- attaching to shared memory
- attaching to the UDS wakeup channel
- writing coroutine state on suspend / resume
- obeying the cTP memory contract
Core Mechanism
The central design idea is simple:
physically separate the execution plane from the observation plane.
The target process only writes state into shared memory.
The Go collector harvests those states asynchronously from outside the process, instead of pushing complicated tracing logic back into the target.
1. Shared Memory Protocol (cTP)
The protocol-level design is documented in the repository.
There are three essential ideas:
- GlobalHeader and StationData are forced into fixed layouts
- Epoch is aligned to a 64-byte cache line
- the writer and reader coordinate through a lock-free seq discipline
2. The C++ Write Protocol
The writer does not simply blast fields into memory without structure.
It follows a strict order:
- first make seq odd to mark "write in progress"
- then write the payload
- finally make seq even to mark "write complete"
This corresponds to PromiseMixin::write_trace in SDK/c++/coroTracer.h.
3. The Go Read Protocol
The Go reader also does not trust a slot just because data is present.
It follows three steps:
- read seq once; only if it is even and newer than the local lastSeen is the payload copied
- read seq again after the copy
- only if the two seq values match is the record written out as JSONL
This check is implemented on the Go side of the collector.
4. Smart UDS Wakeup
To avoid wasting CPU cycles when traffic is low:
- the Go side sets TracerSleeping = 1 while idle
- once the C++ side finishes a write and notices the tracer is sleeping, it sends a 1-byte UDS wakeup signal
This avoids syscall storms under heavy throughput while also avoiding a pure busy-spin under light throughput.
Quick Start
1. Build
go build -o coroTracer main.go
2. Trace a Target Program
./coroTracer -n 256 -cmd "./your_target_app" -out trace.jsonl
This does the following:
- preallocates 256 stations
- launches ./your_target_app
- writes the trace into trace.jsonl
One important constraint:
- -cmd mode is collection-only
- it does not export into a database in the same run
So collection and export are two separate stages.
3. Integrate the C++ SDK
The target program inherits IPC configuration through environment variables.
The smallest possible integration looks like this:
#include "coroTracer.h"

int main() {
    corotracer::InitTracer();
    // ... start your scheduler
}
For coroutine promises, you can inherit from PromiseMixin:
struct promise_type : public corotracer::PromiseMixin {
    // your business logic
};
The SDK records the state transitions associated with await_suspend and await_resume.
Exporting JSONL
Export mode only works on an already existing JSONL file.
It cannot be used together with -cmd.
So this is allowed:
./coroTracer -export sqlite -in trace.jsonl
But this is not:
./coroTracer -cmd "./your_target_app" -export sqlite
1. Export to SQLite
./coroTracer -export sqlite -in trace.jsonl -sqlite-out trace.sqlite
Notes:
- by default the output filename is derived as <input>.sqlite
- runtime requires a local sqlite3 binary
2. Export to CSV (DataFrame-Friendly)
./coroTracer -export csv -in trace.jsonl -csv-out trace.csv
That CSV can be consumed directly by:
- pandas
- polars
- DuckDB
- R
3. Export to MySQL
./coroTracer \
-export mysql \
-in trace.jsonl \
-db-host 127.0.0.1 \
-db-port 3306 \
-db-user root \
-db-password your_password \
-db-name coro_tracer \
-db-table coro_trace_events
If you use a Unix socket, you can also do:
./coroTracer \
-export mysql \
-in trace.jsonl \
-db-user root \
-db-password your_password \
-mysql-socket /tmp/mysql.sock
Notes:
- runtime requires a local mysql CLI
- the exporter creates the database and table automatically, then inserts the data
4. Export to PostgreSQL
./coroTracer \
-export postgresql \
-in trace.jsonl \
-db-host 127.0.0.1 \
-db-port 5432 \
-db-user postgres \
-db-password your_password \
-db-name coro_tracer \
-db-table coro_trace_events \
-pg-sslmode disable