
<div align="center"> <img src="assets/logo.png" alt="AutoAgents Logo" width="200" height="200">

AutoAgents

A production-grade multi-agent framework in Rust


English | 中文 | 日本語 | Español | Français | Deutsch | 한국어 | Português (Brasil) <br /> <sub>Translations may lag behind the English README.</sub>

Documentation | Examples | Contributing

<br /> <strong>Like this project?</strong> <a href="https://github.com/liquidos-ai/AutoAgents">Star us on GitHub</a> </div>

Overview

AutoAgents is a modular, multi-agent framework for building intelligent systems in Rust. It combines a type-safe agent model with structured tool calling, configurable memory, and pluggable LLM backends. The architecture is designed for performance, safety, and composability across server and edge, and serves as the foundation for higher-level systems like Odyssey.


Key Features

  • Agent execution: ReAct and basic executors, streaming responses, and structured outputs
  • Tooling: Derive macros for tools and outputs, plus a sandboxed WASM runtime for tool execution
  • Memory: Sliding window memory with extensible backends
  • LLM providers: Cloud and local backends behind a unified interface
  • LLM guardrails: Built-in guardrails for safeguarding LLM inference
  • LLM optimization: Build LLM pipelines with optimization passes such as caching and retry for faster, more reliable inference
  • Multi-agent orchestration: Typed pub/sub communication and environment management
  • Speech processing: Local text-to-speech (TTS) and speech-to-text (STT) support
  • Observability: OpenTelemetry tracing and metrics with pluggable exporters
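The sliding-window memory mentioned above keeps only the most recent N messages in context. A minimal, self-contained sketch of the idea (this is an illustration, not the actual `SlidingWindowMemory` implementation):

```rust
use std::collections::VecDeque;

/// Toy sliding-window buffer: retains at most `capacity` recent messages.
pub struct SlidingWindow {
    capacity: usize,
    messages: VecDeque<String>,
}

impl SlidingWindow {
    pub fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::new() }
    }

    /// Append a message, evicting the oldest one if the window is full.
    pub fn push(&mut self, msg: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front(); // evict the oldest message
        }
        self.messages.push_back(msg.into());
    }

    /// Current window contents, oldest first.
    pub fn context(&self) -> Vec<&str> {
        self.messages.iter().map(String::as_str).collect()
    }
}
```

With a capacity of 2, pushing three messages evicts the first, so only the two most recent remain in context.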

Supported LLM Providers

Cloud Providers

| Provider     | Status |
| ------------ | ------ |
| OpenAI       | ✅     |
| OpenRouter   | ✅     |
| Anthropic    | ✅     |
| DeepSeek     | ✅     |
| xAI          | ✅     |
| Phind        | ✅     |
| Groq         | ✅     |
| Google       | ✅     |
| Azure OpenAI | ✅     |
| MiniMax      | ✅     |

Local Providers

| Provider   | Status |
| ---------- | ------ |
| Ollama     | ✅     |
| Mistral-rs | ✅     |
| Llama-Cpp  | ✅     |

Experimental Providers

See https://github.com/liquidos-ai/AutoAgents-Experimental-Backends

| Provider | Status          |
| -------- | --------------- |
| Burn     | ⚠️ Experimental |
| Onnx     | ⚠️ Experimental |

Provider support is actively expanding based on community needs.
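Swappable backends work because agents depend on a provider trait rather than a concrete backend. The toy sketch below illustrates the trait-object pattern; the `Provider` trait here is a stand-in invented for illustration, not the crate's real `LLMProvider` trait:

```rust
// Toy stand-in for a unified provider interface; the real
// `autoagents::llm::LLMProvider` trait has a different shape.
pub trait Provider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

/// A trivial provider that echoes its prompt back.
pub struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &'static str { "echo" }
    fn complete(&self, prompt: &str) -> String {
        format!("[echo] {prompt}")
    }
}

/// Agent code depends only on the trait object, so any backend
/// implementing `Provider` can be swapped in without changes here.
pub fn run(provider: &dyn Provider, prompt: &str) -> String {
    provider.complete(prompt)
}
```

Adding a new cloud or local backend then amounts to implementing the trait, which is why the provider list above can grow without touching agent code.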


Benchmarks

Benchmark chart omitted here; see the GitHub repository for full results.


Installation

Prerequisites

  • Rust (latest stable recommended)
  • Cargo package manager
  • LeftHook for Git hooks management
  • Python 3.9+ (required for Python bindings)
  • uv for Python environment and package management
  • maturin (required to build/install local Python bindings from source)

System packages (Debian/Ubuntu):

sudo apt update
sudo apt install build-essential libasound2-dev alsa-utils pkg-config libssl-dev -y

Install LeftHook

macOS (Homebrew):

brew install lefthook

Linux/Windows (npm):

npm install -g lefthook

Clone and Build

git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
lefthook install
cargo build --workspace --all-features

Python Bindings

AutoAgents ships Python bindings to PyPI. Install the base package and add backends via extras:

pip install autoagents-py                            # core + cloud LLM providers
pip install "autoagents-py[llamacpp]"               # + llama.cpp CPU
pip install "autoagents-py[llamacpp-cuda]"          # + llama.cpp CUDA
pip install "autoagents-py[llamacpp-metal]"         # + llama.cpp Metal (macOS)
pip install "autoagents-py[llamacpp-vulkan]"        # + llama.cpp Vulkan
pip install "autoagents-py[mistralrs]"              # + mistral-rs CPU
pip install "autoagents-py[mistralrs-cuda]"         # + mistral-rs CUDA
pip install "autoagents-py[mistralrs-metal]"        # + mistral-rs Metal (macOS)
pip install "autoagents-py[guardrails]"             # + Guardrails
pip install "autoagents-py[llamacpp-cuda,guardrails]"  # combine extras
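To check which optional backends are importable in the current environment, a small sketch using the standard library (the module names below are assumptions for illustration; consult the autoagents-py docs for the actual import paths):

```python
import importlib.util


def backend_available(module_name: str) -> bool:
    """Return True if the given module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None


# Module names are illustrative assumptions, not confirmed package layout.
for mod in ("autoagents", "autoagents_llamacpp", "autoagents_mistralrs"):
    status = "installed" if backend_available(mod) else "missing"
    print(f"{mod}: {status}")
```

Because `find_spec` returns `None` rather than raising for absent top-level modules, this check never fails even when no extras are installed.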

Development install from this repo:

uv venv --python=3.12
source .venv/bin/activate          # Windows: .venv\Scripts\activate
uv pip install -U pip maturin pytest pytest-asyncio pytest-cov

# Clean, build, and install all CPU bindings into the active venv
make python-bindings-build

# Clean, build, and install CPU + CUDA bindings
make python-bindings-build-cuda

The Make targets remove stale editable-install extension artifacts before rebuilding, which avoids loading out-of-date .abi3.so files from the source tree.
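Conceptually, that cleanup step amounts to deleting compiled extension modules from the source tree before rebuilding. A hedged sketch of the idea (the path and exact commands are illustrative; the repository's actual Make targets may differ):

```shell
# Illustrative only: remove stale compiled extension modules so Python
# does not load an outdated .abi3.so from the source tree.
clean_stale_bindings() {
    find "${1:-bindings/python}" -name "*.abi3.so" -type f -delete
}
```

Running this before `maturin` rebuilds ensures the freshly built extension, not a leftover artifact, is what gets imported.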

Example scripts:

  • Core cloud example: bindings/python/autoagents/examples/openai_agent.py
  • llama.cpp example: bindings/python/autoagents-llamacpp/examples/llamacpp_agent.py
  • mistral-rs example: bindings/python/autoagents-mistralrs/examples/mistral_rs_agent.py

Run Tests

cargo test --features "full" --workspace

Quick Start

use async_trait::async_trait;
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgent, ReActAgentOutput};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT, DirectAgent};
use autoagents::core::error::Error;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use autoagents_derive::{agent, tool, AgentHooks, AgentOutput, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

#[async_trait]
impl ToolRuntime for Addition {
    async fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        println!("execute tool: {:?}", args);
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If user asks other than math questions, use this to answer them.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput,
)]
#[derive(Default, Clone, AgentHooks)]
pub struct MathAgent {}

impl From<ReActAgentOutput> for MathAgentOutput {
    fn from(output: ReActAgentOutput) -> Self {
        let resp = output.response;
        if output.done && !resp.trim().is_empty() {
            if let Ok(value) = serde_json::from_str::<MathAgentOutput>(&resp) {
                return value;
            }
        }
        MathAgentOutput {
            value: 0,
            explanation: resp,
            generic: None,
        }
    }
}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

    let agent_handle = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MathAgent {}))
        .llm(llm)
        .memory(sliding_window_memory)
        .build()
        .await?;

    let result = agent_handle.agent.run(Task::new("What is 1 + 1?")).await?;
    println!("Result: {:?}", result);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());

    let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
        .api_key(api_key)
        .model("gpt-4o")
        .max_tokens(512)
        .temperature(0.2)
        .build()
        .expect("Failed to build LLM");

    let _ = simple_agent(llm).await?;
    Ok(())
}

AutoAgents CLI

The AutoAgents CLI runs agentic workflows from YAML configurations and serves them over HTTP. See https://github.com/liquidos-ai/AutoAgents-CLI.


Examples

Explore the examples to get started quickly:

Basic

Demonstrates a range of patterns: a simple agent with tools, a very basic agent, an edge agent, chaining, an actor-based model, streaming, and agent hooks.

LLM Pipelines

Demonstrates LLM pipelines with optimization passes such as caching and retry.
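The cache and retry passes mentioned above can be sketched in plain Rust. This is a conceptual illustration under assumed semantics (cache hits skip the model call; transient errors are retried a bounded number of times), not the crate's actual pipeline API:

```rust
use std::collections::HashMap;

/// Toy pipeline combining two optimization passes: a response cache
/// and bounded retries. The real AutoAgents pipeline API differs.
pub struct Pipeline {
    pub cache: HashMap<String, String>,
    max_retries: u32,
}

impl Pipeline {
    pub fn new(max_retries: u32) -> Self {
        Self { cache: HashMap::new(), max_retries }
    }

    /// Run `call` for `prompt`: consult the cache first, then attempt the
    /// call, retrying up to `max_retries` more times on failure.
    pub fn run<F>(&mut self, prompt: &str, mut call: F) -> Result<String, String>
    where
        F: FnMut(&str) -> Result<String, String>,
    {
        if let Some(hit) = self.cache.get(prompt) {
            return Ok(hit.clone()); // cache pass: skip the model call
        }
        let mut last_err = String::new();
        for _ in 0..=self.max_retries {
            match call(prompt) {
                Ok(resp) => {
                    self.cache.insert(prompt.to_string(), resp.clone());
                    return Ok(resp);
                }
                Err(e) => last_err = e, // retry pass: try again
            }
        }
        Err(last_err)
    }
}
```

A flaky call that fails once and then succeeds returns its result on the retry, and a repeated prompt is answered from the cache without invoking the call at all.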
