ractor
<p align="center"> <img src="https://raw.githubusercontent.com/slawlor/ractor/main/docs/ractor_logo.svg" width="50%" /> </p>

Pronounced rak-ter
A pure-Rust actor framework. Inspired by Erlang's gen_server, with the speed + performance of Rust!
- <img alt="github" src="https://img.shields.io/badge/github-slawlor/ractor-8da0cb?style=for-the-badge&labelColor=555555&logo=github" height="20">
- <img alt="crates.io" src="https://img.shields.io/crates/v/ractor.svg?style=for-the-badge&color=fc8d62&logo=rust" height="20">
- <img alt="docs.rs" src="https://img.shields.io/badge/docs.rs-ractor-66c2a5?style=for-the-badge&labelColor=555555&logo=docs.rs" height="20">
- <img alt="docs.rs" src="https://img.shields.io/badge/docs.rs-ractor_cluster-66c2a5?style=for-the-badge&labelColor=555555&logo=docs.rs" height="20">
Runtime semantics
See the detailed runtime semantics and guarantees in docs/runtime-semantics.md, covering priority channels, supervision semantics, RPC/cluster behaviors, and recommended best practices.
Updates
- Website: ractor has a companion website with more detailed getting-started guides along with some best practices, and it is updated regularly. API docs will still be available at docs.rs; this is a supplementary site for ractor. Try it out! https://slawlor.github.io/ractor/
- RustConf'24: ractor was a key part of a presentation at RustConf'24. It's used as the basis for Meta's Rust Thrift overload protection scheme. The presentation's slides are available here.
About
ractor tries to solve the problem of building and maintaining an Erlang-like actor framework in Rust. It gives a set of generic primitives and helps automate the supervision tree and management of your actors along with the traditional actor message-processing logic. It was originally designed for the tokio runtime, but it now supports the async-std runtime as well.

ractor is a modern actor framework written in 100% Rust.
Additionally ractor has a companion library, ractor_cluster which is needed for ractor to be deployed in a distributed (cluster-like) scenario. ractor_cluster shouldn't be considered production ready, but it is relatively stable and we'd love your feedback!
Why ractor?
There are other actor frameworks written in Rust (Actix, riker, or just actors in Tokio), plus a bunch of others like the list compiled in this Reddit post.

ractor tries to be different by modelling more closely on a pure Erlang gen_server. This means that each actor can also simply be a supervisor to other actors with no additional cost (simply link them together!). Additionally, we aim to stay close to Erlang's patterns, as they work quite well and are well utilized in the industry.

We also wrote ractor without building on some kind of "Runtime" or "System" which needs to be spawned. Actors can be run independently, in conjunction with other basic tokio runtimes, with little additional overhead.
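To illustrate the point about cost-free supervision, here is a minimal sketch of linking a child to a supervisor at spawn time. The `Worker` and `Supervisor` names are hypothetical, and this assumes ractor with its default tokio runtime:

```rust
use ractor::{async_trait, Actor, ActorProcessingErr, ActorRef, SupervisionEvent};

/// Hypothetical child actor, for illustration only.
pub struct Worker;

#[async_trait]
impl Actor for Worker {
    type Msg = ();
    type State = ();
    type Arguments = ();

    async fn pre_start(&self, _: ActorRef<()>, _: ()) -> Result<(), ActorProcessingErr> {
        Ok(())
    }
}

/// Hypothetical supervisor. Any actor can supervise others: spawn
/// children linked to it and implement handle_supervisor_evt.
pub struct Supervisor;

#[async_trait]
impl Actor for Supervisor {
    type Msg = ();
    type State = ();
    type Arguments = ();

    async fn pre_start(&self, myself: ActorRef<()>, _: ()) -> Result<(), ActorProcessingErr> {
        // spawn_linked ties the child's lifecycle to this supervisor
        let (_child, _) = Actor::spawn_linked(None, Worker, (), myself.get_cell()).await?;
        Ok(())
    }

    async fn handle_supervisor_evt(
        &self,
        _myself: ActorRef<()>,
        event: SupervisionEvent,
        _state: &mut (),
    ) -> Result<(), ActorProcessingErr> {
        // other events (started, terminated, ...) can be matched here too
        if let SupervisionEvent::ActorFailed(who, reason) = event {
            eprintln!("child {} failed: {}", who.get_id(), reason);
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (sup, handle) = Actor::spawn(None, Supervisor, ())
        .await
        .expect("Failed to start supervisor");
    sup.stop(None);
    handle.await.expect("Supervisor failed to exit properly");
}
```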
We currently have full support for:

- Single-threaded message processing
- Actor supervision tree
- Remote procedure calls to actors in the `rpc` module
- Timers in the `time` module
- Named actor registry (`registry` module) from Erlang's Registered processes
- Process groups (`ractor::pg` module) from Erlang's `pg` module
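A hedged sketch of the registry and process-group APIs listed above; the `Echo` actor and the names "echo"/"echoes" are made up for illustration:

```rust
use ractor::concurrency::{sleep, Duration};
use ractor::{async_trait, Actor, ActorProcessingErr, ActorRef};

/// Hypothetical actor used only to illustrate the registry and pg APIs.
pub struct Echo;

#[async_trait]
impl Actor for Echo {
    type Msg = String;
    type State = ();
    type Arguments = ();

    async fn pre_start(&self, _: ActorRef<String>, _: ()) -> Result<(), ActorProcessingErr> {
        Ok(())
    }

    async fn handle(&self, _: ActorRef<String>, msg: String, _: &mut ()) -> Result<(), ActorProcessingErr> {
        println!("got: {msg}");
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    // Spawning with Some(name) registers the actor in the global registry
    let (actor, handle) = Actor::spawn(Some("echo".to_string()), Echo, ())
        .await
        .expect("Failed to start echo actor");

    // ...so it can be looked up anywhere by name:
    if let Some(cell) = ractor::registry::where_is("echo".to_string()) {
        cell.send_message("hello via registry".to_string())
            .expect("Failed to send");
    }

    // Process groups: join a named group, then broadcast to all members
    ractor::pg::join("echoes".to_string(), vec![actor.get_cell()]);
    for member in ractor::pg::get_members(&"echoes".to_string()) {
        member
            .send_message("hello via pg".to_string())
            .expect("Failed to send");
    }

    // give the actor a moment to drain its queue before stopping
    sleep(Duration::from_millis(50)).await;
    actor.stop(None);
    handle.await.expect("Echo actor failed to exit properly");
}
```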
Our roadmap includes adding more of the Erlang functionality, including potentially a distributed actor cluster.
Performance
Actors in ractor are generally quite lightweight, and there are benchmarks which you are welcome to run on your own host system with:

```sh
cargo bench -p ractor
```
Further performance improvements are being tracked in #262
Installation
Install ractor by adding the following to your Cargo.toml dependencies:

```toml
[dependencies]
ractor = "0.15"
```
The minimum supported Rust version (MSRV) of ractor is 1.64. However, to utilize the native async fn support in traits rather than relying on the async-trait crate's desugaring functionality, you need to be on Rust version >= 1.75, where async fn in traits was stabilized.
Features
ractor exposes the following features:
- `cluster`, which exposes various functionality required for `ractor_cluster` to set up and manage a cluster of actors over a network link. This is work-in-progress and is being tracked in #16.
- `async-std`, which enables usage of async-std's asynchronous runtime instead of the tokio runtime. However, tokio with the `sync` feature remains a dependency, because we utilize the messaging synchronization primitives from tokio regardless of runtime; they are not specific to the tokio runtime. This work is tracked in #173. You can remove default features to "minimize" the tokio dependencies to just the synchronization primitives.
- `monitors`, which adds support for an Erlang-style monitoring API as an alternative to direct linkage, akin to Process Monitors.
- `message_span_propogation`, which propagates a tracing span through messages between actors to keep tracing context.
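In Cargo.toml these are enabled like any other crate feature; a sketch (adjust the version and feature list to your needs):

```toml
[dependencies]
# default tokio runtime, with cluster support enabled
ractor = { version = "0.15", features = ["cluster"] }

# or: swap runtimes by dropping default features and enabling async-std
# ractor = { version = "0.15", default-features = false, features = ["async-std"] }
```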
Working with Actors
Actors in ractor are very lightweight and can be treated as thread-safe. Each actor will only call one of its handler functions at a time; they will never be executed in parallel. Following the actor model leads to microservices with well-defined state and processing logic.

An example ping-pong actor might look like the following:
```rust
use ractor::{async_trait, cast, Actor, ActorProcessingErr, ActorRef};

/// [PingPong] is a basic actor that will print
/// ping..pong.. repeatedly until some exit
/// condition is met (a counter hits 10). Then
/// it will exit
pub struct PingPong;

/// These are the types of messages [PingPong] supports
#[derive(Debug, Clone)]
pub enum Message {
    Ping,
    Pong,
}

impl Message {
    // retrieve the next message in the sequence
    fn next(&self) -> Self {
        match self {
            Self::Ping => Self::Pong,
            Self::Pong => Self::Ping,
        }
    }
    // print out this message
    fn print(&self) {
        match self {
            Self::Ping => print!("ping.."),
            Self::Pong => print!("pong.."),
        }
    }
}

// the implementation of our actor's "logic"
#[async_trait]
impl Actor for PingPong {
    // An actor has a message type
    type Msg = Message;
    // and (optionally) internal state
    type State = u8;
    // Startup initialization args
    type Arguments = ();

    // Initially we need to create our state, and potentially
    // start some internal processing (by posting a message for
    // example)
    async fn pre_start(
        &self,
        myself: ActorRef<Self::Msg>,
        _: (),
    ) -> Result<Self::State, ActorProcessingErr> {
        // startup the event processing
        cast!(myself, Message::Ping)?;
        // create the initial state
        Ok(0u8)
    }

    // This is our main message handler
    async fn handle(
        &self,
        myself: ActorRef<Self::Msg>,
        message: Self::Msg,
        state: &mut Self::State,
    ) -> Result<(), ActorProcessingErr> {
        if *state < 10u8 {
            message.print();
            cast!(myself, message.next())?;
            *state += 1;
        } else {
            println!();
            // don't send another message; stop the actor after 10 iterations
            myself.stop(None);
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (_actor, handle) = Actor::spawn(None, PingPong, ())
        .await
        .expect("Failed to start ping-pong actor");
    handle
        .await
        .expect("Ping-pong actor failed to exit properly");
}
```
which will output:

```bash
$ cargo run
ping..pong..ping..pong..ping..pong..ping..pong..ping..pong..
$
```
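For request/response interactions (the `rpc` module mentioned earlier), a message variant can carry an `RpcReplyPort`, and the `call_t!` macro awaits the reply with a timeout. A hedged sketch, with a hypothetical `Counter` actor:

```rust
use ractor::{async_trait, call_t, Actor, ActorProcessingErr, ActorRef, RpcReplyPort};

/// Hypothetical counter actor illustrating a request/response (RPC) call.
pub struct Counter;

pub enum CounterMsg {
    Add(u64),
    // the reply port carries the answer back to the caller
    Get(RpcReplyPort<u64>),
}

#[async_trait]
impl Actor for Counter {
    type Msg = CounterMsg;
    type State = u64;
    type Arguments = ();

    async fn pre_start(&self, _: ActorRef<CounterMsg>, _: ()) -> Result<u64, ActorProcessingErr> {
        Ok(0)
    }

    async fn handle(
        &self,
        _: ActorRef<CounterMsg>,
        msg: CounterMsg,
        state: &mut u64,
    ) -> Result<(), ActorProcessingErr> {
        match msg {
            CounterMsg::Add(n) => *state += n,
            // the caller may have timed out and dropped the port, so ignore send errors
            CounterMsg::Get(reply) => {
                let _ = reply.send(*state);
            }
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (actor, handle) = Actor::spawn(None, Counter, ())
        .await
        .expect("Failed to start counter actor");

    actor.cast(CounterMsg::Add(5)).expect("Failed to send");

    // call_t! sends CounterMsg::Get with a fresh reply port and awaits
    // the answer, failing if no reply arrives within 100ms
    let value = call_t!(actor, CounterMsg::Get, 100).expect("RPC failed");
    println!("count = {value}");

    actor.stop(None);
    handle.await.expect("Counter actor failed to exit properly");
}
```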
Messaging actors
Actors communicate by passing messages to one another. A developer can define any message type which is Send + 'static, and it will be supported by ractor. There are 4 concurrent message types, which are listened to in priority order:

- Signals: Signals are the highest priority of all and will interrupt the actor wherever processing currently is (this includes terminating async work). There is only one signal today, `Signal::Kill`, which immediately terminates all work, including message processing and supervision event processing.
- Stop: There is also the pre-defined stop signal. You can give a "stop reason" if you want, but it's optional. Stop is a graceful exit, meaning cur
