
A C++14 library for executors

NOTE:

The library described below is more extensive than that included in the latest standards proposal. To see what forms part of the proposal, visit http://chriskohlhoff.github.io/executors/.

This is a potential standard library proposal that covers:

  • Executors
    • Executors and schedulers
    • Resumable functions / coroutines
    • A model for asynchronous operations
    • An alternative to std::async()
  • Timers
  • Channels

It has been tested with g++ 4.8.2, g++ 4.9 and clang 3.4, each using the -std=c++1y compiler option.

Executors

The central concept of this library is the executor. An executor embodies a set of rules about where, when and how to run a function object. For example:

Type of executor | Where, when and how
---------------- | -------------------
System | Any thread in the process.
Thread pool | Any thread in the pool, and nowhere else.
Strand | Not concurrent with any other function object sharing the strand, and in FIFO order.
Future / Promise | Any thread. Capture any exceptions thrown by the function object and store them in the promise.

Executors are ultimately defined by a set of type requirements, so the set of executors isn't limited to those listed here. Like allocators, library users can develop custom executor types to implement their own rules.

To submit a function object to an executor, we can choose from one of three fundamental operations: dispatch, post and defer. These operations differ in the eagerness with which they run the submitted function.

A dispatch operation is the most eager, and is used when we want to run a function object according to an executor's rules, but in the cheapest way available:

void f1()
{
  std::cout << "Hello, world!\n";
}

// ...

dispatch(ex, f1);

By performing a dispatch operation, we are giving the executor ex the option of having dispatch() run the submitted function object before it returns. Whether an executor does this depends on its rules:

Type of executor | Behaviour of dispatch
---------------- | ---------------------
System | Always runs the function object before returning from dispatch().
Thread pool | If we're inside the thread pool, runs the function object before returning from dispatch(). Otherwise, adds to the thread pool's work queue.
Strand | If we're inside the strand, or if the strand queue is empty, runs the function object before returning from dispatch(). Otherwise, adds to the strand's work queue.
Future / Promise | Wraps the function object in a try/catch block, and runs it before returning from dispatch().

The consequence of this is that, if the executor’s rules allow it, the compiler is able to inline the function call.

A post operation, on the other hand, is not permitted to run the function object itself.

post(ex, f1);

A posted function is scheduled for execution as soon as possible, but according to the rules of the executor:

Type of executor | Behaviour of post
---------------- | -----------------
System | Adds the function object to a system thread pool's work queue.
Thread pool | Adds the function object to the thread pool's work queue.
Strand | Adds the function object to the strand's work queue.
Future / Promise | Wraps the function object in a try/catch block, and adds it to the system work queue.

Finally, the defer operation is the least eager of the three.

defer(ex, f1);

A defer operation is similar to a post operation, except that it implies a relationship between the caller and the function object being submitted. It is intended for use when submitting a function object that represents a continuation of the caller.

Type of executor | Behaviour of defer
---------------- | ------------------
System | If the caller is executing within the system-wide thread pool, saves the function object to a thread-local queue. Once control returns to the system thread pool, the function object is scheduled for execution as soon as possible. If the caller is not inside the system thread pool, behaves as a post operation.
Thread pool | If the caller is executing within the thread pool, saves the function object to a thread-local queue. Once control returns to the thread pool, the function object is scheduled for execution as soon as possible. If the caller is not inside the specified thread pool, behaves as a post operation.
Strand | Adds the function object to the strand's work queue.
Future / Promise | Wraps the function object in a try/catch block, and delegates to the system executor for deferral.

Posting functions to a thread pool

As a simple example, let us consider how to implement the Active Object design pattern using the executors library. In the Active Object pattern, all operations associated with an object are run on its own private thread.

class bank_account
{
  int balance_ = 0;
  std::experimental::thread_pool pool_{1};

public:
  void deposit(int amount)
  {
    post(pool_, [=]
      {
        balance_ += amount;
      });
  }

  void withdraw(int amount)
  {
    post(pool_, [=]
      {
        if (balance_ >= amount)
          balance_ -= amount;
      });
  }
};

Full example: bank_account_1.cpp

First, we create a private thread pool with a single thread:

std::experimental::thread_pool pool_{1};

A thread pool is an example of an execution context. An execution context represents a place where function objects will be executed. This is distinct from an executor which, as an embodiment of a set of rules, is intended to be a lightweight object that is cheap to copy and wrap for further adaptation.

To add the function to the thread pool's queue, we use a post operation:

post(pool_, [=]
  {
    if (balance_ >= amount)
      balance_ -= amount;
  });

For convenience, the post() function is overloaded for execution contexts, such as thread_pool, to take care of obtaining the executor for us. The above call is equivalent to:

post(pool_.get_executor(), [=]
  {
    if (balance_ >= amount)
      balance_ -= amount;
  });

Waiting for function completion

When implementing the Active Object pattern, we will normally want to wait for the operation to complete. To do this we can reimplement our bank_account member functions to pass an additional completion token to the free function post(). A completion token specifies how we want to be notified when the function finishes. For example:

void withdraw(int amount)
{
  std::future<void> fut = std::experimental::post(pool_, [=]
    {
      if (balance_ >= amount)
        balance_ -= amount;
    },
    std::experimental::use_future);
  fut.get();
}

Full example: bank_account_2.cpp

Here, the use_future completion token is specified. When passed the use_future token, the free function post() returns the result via a std::future.

Other types of completion token include plain function objects (used as callbacks), resumable functions or coroutines, and even user-defined types. If we want our active object to accept any type of completion token, we simply change the member functions to accept the token as a template parameter:

template <class CompletionToken>
auto withdraw(int amount, CompletionToken&& token)
{
  return std::experimental::post(pool_, [=]
    {
      if (balance_ >= amount)
        balance_ -= amount;
    },
    std::forward<CompletionToken>(token));
}

Full example: bank_account_3.cpp

The caller of this function can now choose how to receive the result of the operation, as opposed to having a single strategy hard-coded in the bank_account implementation. For example, the caller could choose to receive the result via a std::future:

bank_account acct;
// ...
std::future<void> fut = acct.withdraw(10, std::experimental::use_future);
fut.get();

or callback:

acct.withdraw(10, []{ std::cout << "withdraw complete\n"; });

or any other type that meets the completion token requirements. This approach also works for functions that return a value:

class bank_account
{
  // ...

  template <class CompletionToken>
  auto balance(CompletionToken&& token) const
  {
    return std::experimental::post(pool_, [=]
      {
        return balance_;
      },
      std::forward<CompletionToken>(token));
  }
};

When using use_future, the future's value type is determined automatically from the executed function's return type:

std::future<int> fut = acct.balance(std::experimental::use_future);
std::cout << "balance is " << fut.get() << "\n";

Similarly, when using a callback, the function's result is passed as an argument:

acct.balance([](int bal){ std::cout << "balance is " << bal << "\n"; });

Limiting concurrency using strands

Clearly, having a private thread for each bank_account is not going to scale well to thousands or millions of objects. We may instead want all bank accounts to share a thread pool. The system_executor object provides access to a system thread pool which we can use for this purpose:

std::experimental::system_executor ex;
post(ex, []{ std::cout << "Hello, world!\n"; });

However, the system thread pool uses an unspecified number of threads, and the posted function could run on any of them. The original reason for using the Active Object pattern was to limit the bank_account object's internal logic to run on a single thread. Fortunately, this is exactly the guarantee a strand provides: as the tables above show, function objects submitted through the same strand are never run concurrently with one another, and run in FIFO order, even though they share the underlying pool's threads.
