Iristorm

Iristorm is an extensible, asynchronous, header-only framework written in pure modern C++ (free of AI-generated code). It implements an M:N task scheduler (optionally with C++20 coroutine support) and an advanced DAG-based task dispatcher, and provides a simple Lua/C++ wrapper for extending your program with scripts.

Iristorm is an extensible asynchronous header-only framework written in pure modern C++. It provides:

  • M:N Warp-based Task Scheduler — a flexible task scheduling system inspired by Boost.Asio strands, mapping N logical warps to M worker threads with automatic mutual exclusion.
  • C++20 Coroutine Integration — first-class co_await support for warp switching, task awaiting, barriers, events, and resource quotas.
  • Lua Binding System — a reflection-based C++17 binding layer for exposing C++ types, methods, properties, and coroutines to Lua with minimal boilerplate.
  • DAG-based Task Dispatcher — a task graph for dispatching tasks with partial-order dependencies.

Build

Iristorm is header-only. All you need to do is include the corresponding header files.

Most Iristorm classes work with C++11-compatible compilers, except for some optional features:

  • Lua Binding support requires the C++17 if-constexpr feature. (Visual Studio 2017+, GCC 7+, Clang 3.9+)
  • Coroutine support for the thread pool scheduler requires the C++20 standard coroutine feature. (Visual Studio 2019+, GCC 11+, Clang 14+)

All examples can be built with the CMake build system; see CMakeLists.txt for details.

License

Iristorm is distributed under the MIT License.

Concepts

Iristorm provides a simple M:N task scheduler called the Warp System, which is inspired by the Boost strand system. Let's start by illustrating its basic concepts.

Task

A task is a logical unit of execution in application development. It is usually represented by a function pointer or other callable.

Thread

A thread is a native execution unit provided by the operating system. Tasks must run in threads. Different threads may be running at the same time.

Multi-threading, which runs several threads within one program, is an effective approach to making full use of the CPUs in many-core systems. It is usually hard to code and debug, however, so many data structures, programming patterns, and frameworks exist to simplify the process for developers. This project is one of them.

Thread Pool

Threads are heavy: it is not efficient to spawn a brand-new thread for every task. A thread pool is a multi-threading facility that makes this more efficient. It maintains a set of threads, called worker threads, that are reused to run tasks. When a new task needs to run, the thread pool schedules it onto an idle worker if one is available, or queues it until a worker becomes idle.

Warp

Some tasks read or write the same objects, or call the same thread-unsafe interfaces, which means they must not run at the same time. See race condition for details. Here we simply call them conflicting tasks.

To make our programs run correctly, we must establish some techniques to prevent unexpected conflicts. Here we introduce a new concept: Warp.

A warp is a logical container for a series of conflicting tasks. Tasks belonging to the same warp are automatically guaranteed to be mutually exclusive: no two of them can run at the same time, which proactively avoids race conditions. This guarantee is called the warp restriction. To make coding easier, we can bind all tasks related to a specific object to a specific warp; in this case, we say the object is fully bound to a warp context.

Tasks in different warps, however, may run at the same time.

Warp System

The Warp System is the bridge between warps and the thread pool: programmers commit tasks labeled with a warp to the system, which then schedules them onto the thread pool. With some magic applied internally, the result is a conflict-free task flow.

The thread count M of the Warp System is fixed when it starts, but the warp count N can be adjusted dynamically by the programmer at will. The Warp System is therefore a flexible M:N task-mapping system.

Quick Start

Let's start with simple programs in iris_dispatcher_demo.cpp.

Basic Example: simple explosion

The Warp System runs on a thread pool, so the first step is to create one. A built-in thread pool written with C++11 std::thread is provided in iris_dispatcher.h; you can replace it with your own platform-specific implementation.

static const size_t thread_count = 4;
iris_async_worker_t<> worker(thread_count);

Then we initialize the warps. There is no "warp system class"; each warp is an independent object, so we just create a vector of them. We call them warp 0, warp 1, and so on.

Unlike Boost strands, the tasks in a warp are NOT ordered by default, which means the final execution order may differ from the commit order. You can still enable ordering if you like (see the declaration of strand_t in the following code), though this is not recommended: ordering may be slightly less efficient than the default setting.

static const size_t warp_count = 8;
using warp_t = iris_warp_t<iris_async_worker_t<>>;
using strand_t = iris_warp_t<iris_async_worker_t<>, true>; // behaves like a strand

std::vector<warp_t> warps;
warps.reserve(warp_count);
for (size_t i = 0; i < warp_count; i++) {
	warps.emplace_back(worker); // calls iris_warp_t::iris_warp_t(iris_async_worker_t<>&)
}

Then we can schedule a task onto the warp we want by calling queue_routine.

warps[0].queue_routine([]() { /* operation A on warps[0] */ });
warps[0].queue_routine([]() { /* operation B on warps[0] */ });

That's all you need to do. By the warp restriction, operation A and operation B never execute at the same time, since they belong to the same warp.

If instead we queue tasks to different warps, like:

warps[0].queue_routine([]() { /* do operation C */});
warps[1].queue_routine([]() { /* do operation D */});

then operation C and operation D may execute at the same time, since the warp restriction does not apply across warps.

Here is an "explosion" example: we write a function called explosion, which randomly forks multiple recursive writing operations on an integer array declared as follows:

static int32_t warp_data[warp_count] = { 0 };

The restriction is that warp 0 may only write warp_data[0], warp 1 may only write warp_data[1], and so on:

std::function<void()> explosion;
static constexpr size_t split_count = 4;
static constexpr size_t terminate_factor = 100;

explosion = [&warps, &explosion, &worker]() {
	if (worker.is_terminated())
		return;

	warp_t& current_warp = *warp_t::get_current_warp();
	size_t warp_index = &current_warp - &warps[0];
	warp_data[warp_index]++;

	// simulate working
	std::this_thread::sleep_for(std::chrono::milliseconds(rand() % 40));
	warp_data[warp_index]++;

	if (rand() % terminate_factor == 0) {
		// randomly terminates
		worker.terminate();
	}

	warp_data[warp_index]++;
	// randomly dispatch to warp
	for (size_t i = 0; i < split_count; i++) {
		warps[rand() % warp_count].queue_routine(std::function<void()>(explosion));
	}

	warp_data[warp_index] -= 3;
};

Although there are no locks or atomics protecting warp_data, we can still assert that the final value of every warp_data entry is 0: executions within the same warp never overlap in the timeline.

Advanced Example: garbage collection

There is a function named garbage_collection, which simulates a multi-threaded mark-sweep garbage collection process.

Garbage collection is a technique for collecting unreferenced objects and deleting them. Mark-sweep is a basic approach to garbage collection. It consists of three steps:

  1. Scan all objects and mark them unvisited.
  2. Traverse from the root objects through reference relationships, marking every object that is directly or indirectly referenced as visited.
  3. Rescan all objects and delete those still marked unvisited. Thus all objects not linked to the root objects (i.e. garbage) are deleted.

Now suppose we have the definition of a basic object node as follows:

struct node_t {
	size_t warp_index = 0;
	size_t visit_count = 0; // we do not use std::atomic<> here.
	std::vector<size_t> references;
};

struct graph_t {
	std::vector<node_t> nodes;
};

To apply garbage collection, we need to record every reference from each node and traverse them from the root object during collection. We use visit_count to record whether the current node has been visited.

If you are experienced in multi-threaded programming, you may point out that visit_count should be of type std::atomic<size_t> to be safe under concurrent marking. With warps, however, it does not have to be: as long as every access to a node is dispatched to that node's bound warp (its warp_index), the warp restriction serializes those accesses for us.
