AtomicX
Cooperative multitasking for embedded systems and beyond.

Architecture & Design Document — class diagrams, state machines, thread lifecycle, intrusive controller internals, and stack management details.
AtomicX is a general-purpose cooperative thread library for embedded applications (single-core or confined within another RTOS). It lets you partition your application into multiple controlled execution contexts using cooperative threads — without requiring an operating system, hardware timers, or dynamic memory (unless you opt in).
Key Features
- Zero stack displacement — threads run on the real C stack and only back up the minimum necessary bytes on context switch
- Two stack modes — fixed-size (user-provided buffer, zero heap) or self-managed (auto-resizing via `malloc`)
- Portable — uses only `setjmp`/`longjmp` and `memcpy`; no assembly, no platform-specific code in the core
- Rich IPC — Wait/Notify signaling, thread-safe queues, semaphores, read-write mutexes, data pipes (Send/Receive), and broadcast messaging
- RAII wrappers — `smartMutex` and `smartSemaphore` for automatic resource release
- Tiny footprint — single `.hpp` + `.cpp`, suitable for MCUs with as little as 512 bytes of RAM (e.g., ATtiny85)
- Dynamic nice — optional kernel-managed scheduling that auto-tunes thread timing for best performance
Table of Contents
- Getting Started
- Quick Example
- How It Works
- API Reference
- Platform Porting
- Examples
- Architecture & Design
- Supported Platforms
- Changelog
- License
Getting Started
Requirements
- C++11 or later
- `setjmp.h` support (available on virtually all C/C++ compilers)
Installation
Arduino: Copy the atomicx/ folder into your Arduino libraries directory, or use the Arduino IDE Library Manager.
PlatformIO: Add the library to your lib/ directory.
PC / Linux / macOS: Include atomicx.hpp and compile atomicx.cpp alongside your project:
```sh
g++ -std=c++11 -I atomicx/ atomicx/atomicx.cpp main.cpp -o myapp
```
Minimal Setup
- Include the header
- Implement two platform functions (`Atomicx_GetTick` and `Atomicx_SleepTick`)
- Subclass `thread::atomicx`
- Call `atomicx::Start()`
Quick Example
```cpp
#include <iostream>
#include <sys/time.h>
#include <unistd.h>

#include "atomicx.hpp"

using namespace thread;

// --- Platform functions (user must implement) ---
atomicx_time Atomicx_GetTick(void) {
    struct timeval tp;
    gettimeofday(&tp, NULL);
    return (atomicx_time)tp.tv_sec * 1000 + tp.tv_usec / 1000;
}

void Atomicx_SleepTick(atomicx_time nSleep) {
    usleep((useconds_t)nSleep * 1000);
}

// --- Thread with fixed stack ---
class Blinker : public atomicx {
public:
    Blinker() : atomicx(stack) { SetNice(500); }

    void run() noexcept override {
        int count = 0;
        while (Yield()) {
            std::cout << "Blink " << ++count << std::endl;
        }
    }

    void StackOverflowHandler() noexcept override {
        std::cerr << "Stack overflow in Blinker!" << std::endl;
    }

    const char* GetName() override { return "Blinker"; }

private:
    uint8_t stack[512] = "";
};

// --- Thread with self-managed (auto) stack ---
class Counter : public atomicx {
public:
    Counter() : atomicx(128, 64) { SetNice(1000); }

    void run() noexcept override {
        int n = 0;
        while (Yield()) {
            std::cout << "Count " << ++n << std::endl;
        }
    }

    void StackOverflowHandler() noexcept override {
        std::cerr << "Stack overflow in Counter!" << std::endl;
    }

    const char* GetName() override { return "Counter"; }
};

int main() {
    Blinker b;
    Counter c;
    atomicx::Start(); // blocks here, running all threads cooperatively
}
```
How It Works
AtomicX implements stackful cooperative coroutines:
- Construction — When you instantiate a thread object, it automatically registers itself into a global intrusive doubly-linked list. No manual registration needed.
- `Start()` — Enters the kernel loop. The scheduler picks the next thread and either calls `run()` (first time) or restores its context.
- `Yield()` — The running thread saves its stack segment via `memcpy`, saves its CPU context via `setjmp`, and jumps back to the scheduler via `longjmp`.
- Resume — The scheduler restores the stack segment and jumps into the thread's saved context. Execution continues right after `Yield()`.
- Destruction — When the thread object is destroyed, it automatically removes itself from the scheduler's list.
```
Thread A                Scheduler               Thread B
   │                        │                       │
   │── Yield() ────────────>│                       │
   │   [save stack]         │                       │
   │   [setjmp+longjmp]     │                       │
   │                        │── resume ────────────>│
   │                        │   [restore stack]     │
   │                        │   [longjmp]           │
   │                        │                       │── runs...
   │                        │<───────── Yield() ────│
   │<───────── resume ──────│                       │
   │── runs...              │                       │
```
No preemption. Threads must call `Yield()` (or `Wait()`, or any blocking IPC call) to give control back to the scheduler. This makes all code between yields atomic with respect to other AtomicX threads.
For comprehensive architecture details, see design.md.
API Reference
Thread Lifecycle
Creating a Thread
Subclass atomicx and implement the required virtual methods:
```cpp
class MyThread : public thread::atomicx {
public:
    // Fixed stack: provide a buffer
    MyThread() : atomicx(stack) { SetNice(100); }

    // OR self-managed stack: initial size + increase pace
    // MyThread() : atomicx(256, 32) { SetNice(100); }

    void run() noexcept override {
        // Your thread logic. Call Yield() periodically.
        while (Yield()) {
            // do work
        }
    }

    void StackOverflowHandler() noexcept override {
        // Called when stack exceeds buffer (and auto-resize fails)
    }

    // Optional overrides:
    const char* GetName() override { return "MyThread"; }
    void finish() noexcept override { /* cleanup after run() returns */ }

private:
    uint8_t stack[512] = "";
};
```
Key Methods
| Method | Description |
|--------|-------------|
| atomicx::Start() | Static. Enters the kernel loop — blocks until all threads finish or deadlock |
| Yield(nSleep) | Context switch. Default sleep = thread's nice value. Pass 0 for immediate return |
| YieldNow() | High-priority yield — this thread gets picked up before normal sleepers |
| SetNice(ms) | Set the default sleep interval between yields (in tick units) |
| SetDynamicNice(true) | Let the kernel auto-tune nice based on actual execution time |
| Stop() / Resume() | Suspend / resume the thread |
| Restart() | Calls finish() and re-enters run() from the beginning |
| Detach() | Calls finish(), removes thread from scheduler permanently |
| GetID() | Returns the thread's unique ID (its memory address) |
| GetName() | Returns the thread name (override to customize) |
| GetStackSize() | Allocated stack buffer size |
| GetUsedStackSize() | Actual stack usage from last context switch |
| IsStackSelfManaged() | true if using auto-stack mode |
| GetStatus() / GetSubStatus() | Current thread state (see state machine in design.md) |
| GetCurrentTick() | Returns the current tick via Atomicx_GetTick() |
| GetLastUserExecTime() | How long the thread ran during its last time slice |
| GetThreadCount() | Number of active threads in the system |
| IsKernelRunning() | true if Start() is currently executing |
Iterating All Threads
```cpp
for (auto& th : *atomicx::GetCurrent()) {
    std::cout << th.GetName() << " stack: " << th.GetUsedStackSize()
              << "/" << th.GetStackSize() << std::endl;
}
```
Synchronization
Semaphore
```cpp
atomicx::semaphore sem(3); // max 3 concurrent acquisitions

// In thread:
if (sem.acquire(1000)) { // wait up to 1000 ticks
    // critical section
    sem.release();
}

// RAII version:
atomicx::smartSemaphore ss(sem);
if (ss.acquire()) {
    // auto-released when ss goes out of scope
}
```
| Method | Description |
|--------|-------------|
| semaphore(maxShared) | Create with max concurrent locks |
| acquire(timeout) | Acquire a slot (0 = wait forever) |
| release() | Release one slot |
| GetCount() | Current acquired count |
| GetWaitCount() | Threads waiting to acquire |
Mutex (Read-Write Lock)
```cpp
atomicx::mutex mtx;

// Exclusive lock:
if (mtx.Lock(1000)) { // timeout optional
    // only this thread has access
    mtx.Unlock();
}

// Shared lock (multiple readers):
if (mtx.SharedLock()) {
    // read-only access, other shared locks allowed
    mtx.SharedUnlock();
}

// RAII version:
atomicx::smartMutex sm(mtx);
if (sm.Lock()) {
    // auto-unlocked when sm is destroyed
}
```
IPC: Wait/Notify
Any variable's address can be used as a sync
