coco
coco is a simple, stackless, single-threaded, header-only C++20 coroutine library with Go-like concurrency primitives.
Overview
coco is a lightweight C++20 coroutine library that brings Go-style concurrency to C++.
- 🚀 Native C++20 coroutines - Leverages standard C++20 coroutine support
- 📦 Header-only - Just include coco.h, no linking required
- 🔄 Go-like primitives - Channels (chan_t) and wait groups (wg_t)
- ⚡ Zero dependencies - Only requires the C++20 standard library
- 🎯 Single-threaded - No locks, no data races, cooperative multitasking
- 🔧 Simple scheduler - FIFO queue-based coroutine scheduling
- 🎨 Clean API - Intuitive async/await syntax
Key Features
- Coroutines (co_t) - Stackless coroutines with join support and exception propagation
- Channels (chan_t<T>) - Type-safe communication channels (buffered and unbuffered)
- Wait Groups (wg_t) - Synchronization primitive for coordinating multiple coroutines
- Custom Awaiters - Easy integration with external async systems (io_uring, timers, etc.)
- RAII-friendly - Proper resource management across suspension points
- Extensively tested - Comprehensive test suite with 20+ test files
Requirements
- C++20 compiler with coroutine support:
- GCC 10+ (with -std=c++20 -fcoroutines)
- Clang 14+ (with -std=c++20)
- MSVC 2019+ (with /std:c++20)
- Standard library with <coroutine> header support
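For example, a translation unit that includes coco.h could be built like this (file names and paths are illustrative):

```shell
# GCC 10/11 (the flag is implied by -std=c++20 on GCC 12 and later)
g++ -std=c++20 -fcoroutines -O2 main.cpp -o app

# Clang 14+
clang++ -std=c++20 -O2 main.cpp -o app
```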
Synopsis
Basic Coroutine Usage
#include "coco.h"
using namespace coco;
// 1. Define a coroutine - must return co_t and use co_await/co_yield/co_return
co_t simple_task(int id) {
std::cout << "Task " << id << " started" << std::endl;
co_yield resched; // Yield control, will be automatically rescheduled
std::cout << "Task " << id << " completed" << std::endl;
co_return;
}
// 2. Start and run coroutines
int main() {
auto task = simple_task(1); // Create the coroutine
task.resume(); // Schedule it for execution
// 3. Must use scheduler to run all coroutines
scheduler_t::instance().run();
return 0;
}
Using Join to Compose Multiple Coroutines
C++20 coroutines have a fundamental limitation: you cannot directly call one coroutine from another and await its result. This is because coroutine keywords (co_await, co_yield, co_return) must appear directly in the coroutine function body.
What DOESN'T Work:
co_t authenticate_user(const std::string& username) {
std::cout << "Authenticating " << username << "..." << std::endl;
co_yield resched;
std::cout << "Authentication successful" << std::endl;
co_return;
}
// ❌ This FAILS - mixing return with coroutine keywords
co_t handle_request_WRONG(const std::string& username) {
if (username.empty()) {
return authenticate_user(username); // Compilation error!
// Error: cannot use 'return' in a coroutine (must use co_return)
}
co_yield resched; // This makes the function a coroutine
co_return;
}
// ❌ This also FAILS - cannot directly await a function call
co_t handle_request_ALSO_WRONG(const std::string& username) {
co_await authenticate_user(username); // Doesn't work as expected!
// The coroutine is created but never scheduled/resumed
co_return;
}
What DOES Work - Using join():
The join() method allows you to properly compose coroutines by starting them and waiting for completion:
// Step 1: Authenticate user
co_t authenticate_user(const std::string& username) {
std::cout << "Authenticating " << username << "..." << std::endl;
co_yield resched; // Simulate async auth
std::cout << "Authentication successful" << std::endl;
co_return;
}
// Step 2: Load user data
co_t load_user_data(int user_id) {
std::cout << "Loading data for user " << user_id << "..." << std::endl;
co_yield resched; // Simulate async database query
std::cout << "User data loaded" << std::endl;
co_return;
}
// Step 3: Process request
co_t process_request(const std::string& request) {
std::cout << "Processing request: " << request << std::endl;
co_yield resched; // Simulate async processing
std::cout << "Request processed" << std::endl;
co_return;
}
// ✅ Correct: Use join() to compose coroutines
co_t handle_user_request(const std::string& username, int user_id, const std::string& request) {
// Shortcut: use go().join() to create, schedule, and join in one expression
co_await go([&](){ return authenticate_user(username); }).join();
co_await go([&](){ return load_user_data(user_id); }).join();
co_await go([&](){ return process_request(request); }).join();
std::cout << "All steps completed!" << std::endl;
co_return;
}
int main() {
auto request_handler = handle_user_request("alice", 123, "GET /data");
request_handler.resume();
scheduler_t::instance().run();
return 0;
}
Why Join is Needed:
- You cannot return a coroutine from another coroutine (compilation error)
- You cannot directly co_await a coroutine function call without scheduling it first
- You must create the coroutine, schedule it with resume(), then co_await its join()
- Shortcut pattern: use co_await go(...).join() to create, schedule, and join in one expression
- This pattern allows sequential composition of independent coroutines
- Each coroutine runs independently and can be joined when its result is needed
Producer-Consumer with Channels
co_t producer(chan_t<int>& ch) {
for (int i = 0; i < 3; i++) {
std::cout << "Sending: " << i << std::endl;
bool ok = co_await ch.write(i);
if (!ok) {
std::cout << "Channel closed, stopping producer" << std::endl;
break;
}
}
ch.close();
std::cout << "Producer finished" << std::endl;
}
co_t consumer(chan_t<int>& ch, const std::string& name) {
while (true) {
auto result = co_await ch.read();
if (result.has_value()) {
std::cout << name << " received: " << result.value() << std::endl;
} else {
std::cout << name << " channel closed" << std::endl;
break;
}
}
}
int main() {
chan_t<int> ch(1); // Buffered channel with capacity 1
auto prod = producer(ch);
auto cons1 = consumer(ch, "Consumer1");
auto cons2 = consumer(ch, "Consumer2");
// Resume coroutines and run scheduler
prod.resume();
cons1.resume();
cons2.resume();
scheduler_t::instance().run();
std::cout << "---> ALL DONE!" << std::endl;
return 0;
}
Running Multiple Coroutines with WaitGroup
co_t worker(int id, wg_t& wg) {
std::cout << "Worker " << id << " starting" << std::endl;
co_yield resched; // Simulate async work
std::cout << "Worker " << id << " finished" << std::endl;
wg.done(); // Signal completion
co_return;
}
co_t run_workers() {
wg_t wg;
wg.add(3); // Expecting 3 workers
// Start all workers simultaneously
auto w1 = go([&wg](){ return worker(1, wg); });
auto w2 = go([&wg](){ return worker(2, wg); });
auto w3 = go([&wg](){ return worker(3, wg); });
// Wait for all workers to complete
co_await wg.wait();
std::cout << "All workers completed!" << std::endl;
co_return;
}
int main() {
auto main_task = run_workers();
main_task.resume();
scheduler_t::instance().run();
return 0;
}
Installation
coco is a header-only library. Simply copy coco.h to your project:
# Clone the repository
git clone https://github.com/kingluo/coco.git
# Copy the header to your project
cp coco/coco.h /path/to/your/project/
Or include it directly:
#include "path/to/coco.h"
Examples
The examples/ directory contains practical demonstrations:
1. Channel and Wait Group (channel_and_waitgroup.cpp)
Producer-consumer pattern with channels and scheduler usage.
2. Coroutine Join (join_example.cpp)
Demonstrates coroutine composition, join functionality, and exception handling.
3. Web Server (webserver.cpp)
High-performance HTTP server using io_uring for async I/O with custom awaiters.
Building Examples
# Build all examples
make examples
# Or build individually
cd examples/
make
# Run examples
../build/channel_and_waitgroup
../build/join_example
# Run webserver (requires liburing)
make run-webserver
# Test webserver (in another terminal)
curl -i http://localhost:8000
Testing
The library includes a comprehensive test suite with 20+ test files covering:
- Core components (co_t, chan_t, wg_t)
- Integration tests (producer-consumer, pipelines, fan-out/fan-in)
- Channel behavior (Go compatibility, fair distribution, stress tests)
- C++20 coroutine caveats (RAII, stack relocation, lifetime management)
- Bug fixes and edge cases
Running Tests
# Run all tests
make test
# Or run specific test categories
cd tests/
make run-core-tests # Core library tests
make run-channel-tests # Channel behavior tests
make run-wg-tests # Wait group tests
# Run individual tests
make run-co-t
make run-chan-t
make run-integration
See tests/README.md for detailed test documentation.
API Reference
Core Types
co_t - Coroutine Type
The fundamental coroutine type that wraps C++20 coroutine handles.
struct co_t {
std::coroutine_handle<promise_type> handle; // Underlying coroutine handle
void resume(); // Schedule coroutine for execution
join_awaiter join(); // Await coroutine completion
bool is_done() const; // C
