MoFA - Modular Framework for Agents. Modular, Compositional and Programmable.
MoFA Agent Framework
<p align="center"> <img src="docs/images/mofa-logo.png" width="30%"/> </p> <div align="center"> <a href="https://crates.io/crates/mofa-sdk"> <img src="https://img.shields.io/crates/v/mofa-sdk.svg" alt="crates.io"/> </a> <a href="https://pypi.org/project/mofa-core/"> <img src="https://img.shields.io/pypi/v/mofa-core.svg" alt="PyPI"/> </a> <a href="https://github.com/mofa-org/mofa/blob/main/LICENSE"> <img src="https://img.shields.io/github/license/mofa-org/mofa" alt="License"/> </a> <a href="https://docs.rs/mofa-sdk"> <img src="https://img.shields.io/badge/built_with-Rust-dca282.svg?logo=rust" alt="docs"/> </a> <a href="https://github.com/mofa-org/mofa/stargazers"> <img src="https://img.shields.io/github/stars/mofa-org/mofa" alt="GitHub Stars"/> </a> <a href="https://discord.com/invite/hKJZzDMMm9"> <img src="https://img.shields.io/discord/1345678901234567890?color=5865F2&logo=discord&logoColor=white&label=Discord" alt="Discord"/> </a> <a href="https://docs.rs/mofa-sdk"> <img src="https://img.shields.io/docsrs/mofa-sdk" alt="docs.rs"/> </a> </div> <h2 align="center"> <a href="https://mofa.ai/">Website</a> | <a href="https://mofa.ai/docs/0overview/">Quick Start</a> | <a href="https://github.com/mofa-org/mofa">GitHub</a> | <a href="https://hackathon.mofa.ai/">Hackathon</a> | <a href="https://discord.com/invite/hKJZzDMMm9">Community</a> </h2> <p align="center"> <img src="https://img.shields.io/badge/Performance-Extreme-red?style=for-the-badge" /> <img src="https://img.shields.io/badge/Extensibility-Unlimited-orange?style=for-the-badge" /> <img src="https://img.shields.io/badge/Languages-Multi_platform-yellow?style=for-the-badge" /> <img src="https://img.shields.io/badge/Runtime-Programmable-green?style=for-the-badge" /> </p>

📋 Table of Contents
- Overview
- Why MoFA?
- Core Architecture
- Core Features
- Quick Start
- Roadmap
- Ecosystem & Related Repos
- Documentation
- Security
- Contributing
- Community
- License
Overview
MoFA (Modular Framework for Agents) is not just another entry in the crowded agent framework landscape. It is the first production-grade framework to achieve "write once, run everywhere" across languages, built for extreme performance, boundless extensibility, and runtime programmability. Through its revolutionary microkernel architecture and innovative dual-layer plugin system (compile-time + runtime), MoFA strikes the elusive balance between raw performance and dynamic flexibility.
What Sets MoFA Apart:<br/> ✅ Rust Core + UniFFI: Blazing performance with native multi-language interoperability<br/> ✅ Dual-Layer Plugins: Zero-cost compile-time extensions meet hot-swappable runtime scripts<br/> ✅ Microkernel Architecture: Clean separation of concerns, effortless to extend<br/> ✅ Cloud-Native by Design: First-class support for distributed and edge deployments
Why MoFA?
Performance
- Zero-cost abstractions in Rust
- Memory safety without garbage collection
- Orders of magnitude faster than comparable Python-based frameworks on compute-bound workloads
Polyglot by Design
- Auto-generated bindings for Python, Java, Go, Kotlin, Swift via UniFFI
- Call Rust core logic natively from any supported language
- Near-zero overhead compared to traditional FFI
Runtime Programmability
- Embedded Rhai scripting engine
- Hot-reload business logic without recompilation
- Runtime configuration and rule adjustments
- User-defined extensions on the fly
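The hot-reload idea above can be sketched in plain Rust: business logic lives behind a shared handle and is swapped at runtime without restarting the host process. In MoFA that role is played by Rhai scripts; the `HotRule` type and `swap` method below are illustrative stand-ins, not MoFA API.

```rust
use std::sync::RwLock;

// A swappable unit of business logic. In MoFA this would be a Rhai script;
// here a boxed closure stands in for it.
type Rule = Box<dyn Fn(i64) -> i64 + Send + Sync>;

struct HotRule(RwLock<Rule>);

impl HotRule {
    fn new(rule: Rule) -> Self {
        HotRule(RwLock::new(rule))
    }
    // Evaluate with whatever logic is currently installed.
    fn eval(&self, input: i64) -> i64 {
        (self.0.read().unwrap())(input)
    }
    // "Hot-reload": replace the logic while the process keeps running.
    fn swap(&self, rule: Rule) {
        *self.0.write().unwrap() = rule;
    }
}

fn main() {
    let rule = HotRule::new(Box::new(|x| x + 1));
    assert_eq!(rule.eval(41), 42);
    rule.swap(Box::new(|x| x * 2)); // new logic takes effect instantly
    assert_eq!(rule.eval(21), 42);
}
```

The `RwLock` allows many concurrent readers, so swapping logic does not block evaluation except during the brief write.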
Dual-Layer Plugin Architecture
- Compile-time plugins: Extreme performance, native integration
- Runtime plugins: Dynamic loading, instant effect
- Plugin hot loading and version management
Distributed by Nature
- Built on Dora-rs for distributed dataflow
- Seamless cross-process, cross-machine agent communication
- Edge computing ready
Actor-Model Concurrency
- Isolated agent processes via Ractor
- Message-passing architecture
- Battle-tested for high-concurrency workloads
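The actor pattern above can be illustrated with plain std channels and threads: each "agent" owns its state, and the only way to touch that state is a message. This is not Ractor's actual API, just a minimal sketch of the isolation and message-passing model.

```rust
use std::sync::mpsc;
use std::thread;

// Messages an agent accepts. The reply channel makes Get a request-response.
enum Msg {
    Add(i64),
    Get(mpsc::Sender<i64>),
    Stop,
}

// Spawn an "agent" that owns a counter; callers only get its mailbox.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0; // state is isolated inside the actor thread
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => { let _ = reply.send(total); }
                Msg::Stop => break,
            }
        }
    });
    tx
}

// Synchronous query helper: send a reply channel, wait for the answer.
fn query(agent: &mpsc::Sender<Msg>) -> i64 {
    let (reply_tx, reply_rx) = mpsc::channel();
    agent.send(Msg::Get(reply_tx)).unwrap();
    reply_rx.recv().unwrap()
}

fn main() {
    let agent = spawn_counter();
    agent.send(Msg::Add(40)).unwrap();
    agent.send(Msg::Add(2)).unwrap();
    assert_eq!(query(&agent), 42);
    agent.send(Msg::Stop).unwrap();
}
```

Because no other thread can reach `total` directly, there are no locks and no data races: concurrency safety falls out of the ownership model.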
Core Architecture
Microkernel + Dual-Layer Plugin System
MoFA adopts a layered microkernel architecture, achieving extreme extensibility through a dual-layer plugin system:
```mermaid
block-beta
    columns 1
    block:business["🧩 Business Layer"]
        A["User-defined Agents, Workflows, Rules"]
    end
    space
    block:runtime["⚡ Runtime Plugin Layer (Rhai Scripts)"]
        B["Dynamic tool registration"]
        C["Rule engine & Scripts"]
        D["Hot-load logic"]
    end
    space
    block:compile["🔧 Compile-time Plugin Layer (Rust/WASM)"]
        E["LLM plugins"]
        F["Tool plugins"]
        G["Storage & Protocol"]
    end
    space
    block:kernel["🏗️ Microkernel (mofa-kernel)"]
        H["Lifecycle management"]
        I["Metadata & Communication"]
        J["Task scheduling"]
    end
    business --> runtime
    runtime --> compile
    compile --> kernel
```
Advantages of Dual-Layer Plugin System
Compile-time Plugins (Rust/WASM)
- Extreme performance, zero runtime overhead
- Type safety, compile-time error checking
- Supports complex system calls and native integration
- WASM sandbox provides secure isolation
Runtime Plugins (Rhai Scripts)
- No recompilation needed, instant effect
- Business logic hot updates
- User-defined extensions
- Secure sandbox execution with configurable resource limits
Combined Power
- Use Rust plugins for performance-critical paths (e.g., LLM inference, data processing)
- Use Rhai scripts for business logic (e.g., rule engines, workflow orchestration)
- Seamless interoperability between both, covering 99% of extension scenarios
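The dual-layer split can be sketched as a single registry that serves both kinds of plugin: a compile-time plugin is a statically linked trait object, while a runtime plugin is represented here by a closure hot-registered at runtime, standing in for a Rhai script. The `ToolPlugin` trait and `Registry` type are illustrative, not MoFA's actual `AgentPlugin` API.

```rust
use std::collections::HashMap;

// Compile-time plugin contract: implemented in native Rust, linked statically.
trait ToolPlugin {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

// A trivial native plugin.
struct Echo;
impl ToolPlugin for Echo {
    fn name(&self) -> &str { "echo" }
    fn call(&self, input: &str) -> String { input.to_string() }
}

// One registry, two layers: native trait objects and runtime closures.
struct Registry {
    native: HashMap<String, Box<dyn ToolPlugin>>,
    scripted: HashMap<String, Box<dyn Fn(&str) -> String>>,
}

impl Registry {
    fn new() -> Self {
        Registry { native: HashMap::new(), scripted: HashMap::new() }
    }
    fn register_native(&mut self, p: Box<dyn ToolPlugin>) {
        self.native.insert(p.name().to_string(), p);
    }
    // Runtime registration: takes effect immediately, no recompile.
    fn register_script(&mut self, name: &str, f: Box<dyn Fn(&str) -> String>) {
        self.scripted.insert(name.to_string(), f);
    }
    // Callers don't care which layer serves the tool.
    fn call(&self, name: &str, input: &str) -> Option<String> {
        if let Some(p) = self.native.get(name) {
            return Some(p.call(input));
        }
        self.scripted.get(name).map(|f| f(input))
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register_native(Box::new(Echo));
    reg.register_script("shout", Box::new(|s| s.to_uppercase()));
    assert_eq!(reg.call("echo", "hi").unwrap(), "hi");
    assert_eq!(reg.call("shout", "hi").unwrap(), "HI");
}
```

The key design point is the uniform `call` path: whether a tool is native or scripted is an implementation detail hidden from the agent invoking it.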
Core Features
1. Microkernel Architecture
MoFA adopts a layered microkernel architecture with mofa-kernel at its core. All other features (including plugin system, LLM capabilities, multi-agent collaboration, etc.) are built as modular components on top of the microkernel.
Core Design Principles
- Core Simplicity: The microkernel contains only the most basic functions: agent lifecycle management, metadata system, and dynamic management
- High Extensibility: All advanced features are extended through modular components and plugins, keeping the kernel stable
- Loose Coupling: Components communicate through standardized interfaces, easy to replace and upgrade
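The lifecycle management the kernel owns can be pictured as a small state machine. The states and transition rules below are illustrative only; MoFA's real lifecycle API may define different states and events.

```rust
// Hypothetical agent lifecycle states managed by the kernel.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AgentState {
    Created,
    Running,
    Paused,
    Stopped,
}

// Only legal transitions succeed; anything else is rejected by the kernel.
fn transition(from: AgentState, event: &str) -> Result<AgentState, String> {
    use AgentState::*;
    match (from, event) {
        (Created, "start") => Ok(Running),
        (Running, "pause") => Ok(Paused),
        (Paused, "resume") => Ok(Running),
        (Running, "stop") | (Paused, "stop") => Ok(Stopped),
        (s, e) => Err(format!("invalid event '{e}' in state {s:?}")),
    }
}

fn main() {
    let s = transition(AgentState::Created, "start").unwrap();
    assert_eq!(s, AgentState::Running);
    // Stopped is terminal: restarting requires creating a new agent.
    assert!(transition(AgentState::Stopped, "start").is_err());
}
```

Centralizing transitions in the kernel is what lets components stay loosely coupled: they react to state changes rather than managing them.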
Integration with Plugin System
- The plugin system is built on the microkernel's `Plugin` interface; all plugins (including LLM plugins, tool plugins, etc.) are integrated through the standard `AgentPlugin` interface
- The microkernel provides a plugin registry and lifecycle management, supporting plugin hot loading and version control
- LLM capabilities are implemented through `LLMPlugin`, encapsulating LLM providers as plugins compliant with microkernel specifications
Integration with LLM
- LLM exists as a plugin component of the microkernel, providing standard LLM access through the `LLMCapability` interface
- All agent collaboration patterns (chain, parallel, debate, etc.) are built on the microkernel's workflow engine and interact with LLMs through standardized LLM plugin interfaces
- Secretary mode is also implemented based on the microkernel's A2A communication protocol and task scheduling system
2. Dual-Layer Plugins
- Compile-time plugins: Extreme performance, native integration
- Runtime plugins: Dynamic loading, instant effect
- Seamless collaboration between both, covering all scenarios
3. Agent Coordination
- Priority Scheduling: Task scheduling system based on priority levels
- Communication Bus: Built-in inter-agent communication bus
- Workflow Engine: Visual workflow builder and executor
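The priority scheduling idea can be sketched with std's `BinaryHeap` wrapped in `Reverse`, so that priority 0 is the most urgent. The `(priority, name)` tuple stands in for a real task type; MoFA's actual scheduler API is not shown here.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Drain tasks in priority order (lower number = more urgent).
// Reverse turns the default max-heap into a min-heap on the tuple.
fn drain_by_priority(tasks: Vec<(u8, &str)>) -> Vec<&str> {
    let mut queue: BinaryHeap<Reverse<(u8, &str)>> =
        tasks.into_iter().map(Reverse).collect();
    let mut order = Vec::new();
    while let Some(Reverse((_, name))) = queue.pop() {
        order.push(name);
    }
    order
}

fn main() {
    let tasks = vec![
        (2, "log rotation"),
        (0, "human escalation"),
        (1, "llm call"),
    ];
    assert_eq!(
        drain_by_priority(tasks),
        vec!["human escalation", "llm call", "log rotation"]
    );
}
```

Both `push` and `pop` are O(log n), so a scheduler built this way stays cheap even with many queued tasks.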
4. LLM and AI Capabilities
- LLM Abstraction Layer: Standardized LLM integration interface
- OpenAI Support: Built-in OpenAI API integration
- ReAct Pattern: Agent framework based on reasoning and action
- Multi-Agent Collaboration: LLM-driven agent coordination, supporting multiple collaboration modes:
- Request-Response: One-to-one deterministic tasks with synchronous replies
- Publish-Subscribe: One-to-many broadcast tasks with multiple receivers
- Consensus: Multi-round negotiation and voting for decision-making
- Debate: Agents alternate speaking to iteratively refine results
- Parallel: Simultaneous execution with automatic result aggregation
- Sequential: Pipeline execution where output flows to the next agent
- Custom: User-defined modes interpreted by the LLM
- Secretary Mode: Provides end-to-end closed-loop task management across 5 core phases: receive ideas → record todos; clarify requirements → convert to project documents; schedule and dispatch → call execution agents; monitor feedback → push key decisions to humans; acceptance report → update todos
Features:
- 🧠 Autonomous task planning and decomposition
- 🔄 Intelligent agent scheduling and orchestration
- 👤 Human intervention at key nodes
- 📊 Full process observability and traceability
- 🔁 Closed-loop feedback and continuous optimization
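Of the collaboration modes listed above, Sequential is the simplest to sketch: each "agent" here is a plain function, and the output of one flows in as the input of the next. Real MoFA agents would be LLM-backed; the closures are stand-ins for illustration.

```rust
// Run a pipeline of agents: output of each becomes input of the next.
fn run_pipeline(input: String, agents: &[fn(String) -> String]) -> String {
    agents.iter().fold(input, |acc, agent| agent(acc))
}

// Three hypothetical pipeline stages, named for illustration only.
fn draft(s: String) -> String { format!("draft({s})") }
fn review(s: String) -> String { format!("review({s})") }
fn publish(s: String) -> String { format!("publish({s})") }

fn main() {
    let out = run_pipeline("idea".to_string(), &[draft, review, publish]);
    // The nesting shows the flow: idea -> draft -> review -> publish.
    assert_eq!(out, "publish(review(draft(idea)))");
}
```

Parallel mode would instead fan the same input out to all agents and aggregate the results, but the "agents as composable functions" framing is the same.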
5. Persistence Layer
- Multiple Backends: Supports PostgreSQL, MySQL, and SQLite
- Session Management: Persistent agent session storage
- Memory System: Stateful agent memory management
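The "multiple backends" design implies a storage trait that each backend implements. The `SessionStore` trait below is a hypothetical sketch of that shape, with only an in-memory backend implemented; the real PostgreSQL/MySQL/SQLite backends would implement the same interface.

```rust
use std::collections::HashMap;

// Hypothetical backend-agnostic session storage contract.
trait SessionStore {
    fn save(&mut self, session_id: &str, state: &str);
    fn load(&self, session_id: &str) -> Option<String>;
}

// In-memory backend, useful for tests; a SQL backend would swap in here.
struct MemoryStore {
    sessions: HashMap<String, String>,
}

impl SessionStore for MemoryStore {
    fn save(&mut self, session_id: &str, state: &str) {
        self.sessions.insert(session_id.to_string(), state.to_string());
    }
    fn load(&self, session_id: &str) -> Option<String> {
        self.sessions.get(session_id).cloned()
    }
}

fn main() {
    let mut store = MemoryStore { sessions: HashMap::new() };
    store.save("agent-1", "{\"turn\": 3}");
    assert_eq!(store.load("agent-1").as_deref(), Some("{\"turn\": 3}"));
    assert!(store.load("missing").is_none());
}
```

Because agent code depends only on the trait, a backend can be replaced without touching agent logic, which is the loose-coupling principle from the microkernel section applied to persistence.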
6. Monitoring & Observability
- Dashboard: Built-in web dashboard with real-time metrics
- Metrics System: Prometheus-compatible metrics system
- Tracing Framework: Distributed tracing system
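"Prometheus-compatible" concretely means exposing metrics in the Prometheus text exposition format. The sketch below renders a few counters in that format; the metric names are made up for illustration and do not reflect MoFA's real metric set.

```rust
use std::collections::BTreeMap;

// Render counters in Prometheus text exposition format:
// a "# TYPE" line followed by "name value" for each metric.
// BTreeMap keeps the output order deterministic.
fn render_prometheus(counters: &BTreeMap<&str, u64>) -> String {
    let mut out = String::new();
    for (name, value) in counters {
        out.push_str(&format!("# TYPE {name} counter\n{name} {value}\n"));
    }
    out
}

fn main() {
    let mut counters = BTreeMap::new();
    counters.insert("mofa_tasks_completed_total", 42);
    counters.insert("mofa_agent_restarts_total", 1);
    let text = render_prometheus(&counters);
    assert!(text.contains("mofa_tasks_completed_total 42"));
    assert!(text.contains("# TYPE mofa_agent_restarts_total counter"));
}
```

A dashboard or a Prometheus scraper can consume this text over HTTP; the `_total` suffix follows the Prometheus naming convention for counters.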
7. Rhai Script Engine
MoFA integrates the [Rhai](https://github.com/rhaiscript/rhai) scripting engine, enabling hot-reloadable business logic and runtime rule adjustments without recompilation.