🚀 EvoAgentX: Building a Self-Evolving Ecosystem of AI Agents

<a href="./README.md" style="text-decoration: underline;">English</a> | <a href="./README-zh.md">简体中文</a>

What is EvoAgentX
EvoAgentX is an open-source framework for building, evaluating, and evolving LLM-based agents or agentic workflows in an automated, modular, and goal-driven manner. At its core, EvoAgentX enables developers and researchers to move beyond static prompt chaining or manual workflow orchestration. It introduces a self-evolving agent ecosystem, where AI agents can be constructed, assessed, and optimized through iterative feedback loopsโmuch like how software is continuously tested and improved.
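The iterative feedback loop described above (construct, assess, optimize) can be illustrated with a toy evolutionary loop. This is a conceptual sketch only; the function names here are hypothetical and are not EvoAgentX's API:

```python
import random

def evolve(candidates, score, mutate, generations=5, seed=0):
    """Toy evolutionary loop: evaluate candidates, keep the best, mutate it."""
    rng = random.Random(seed)
    best = max(candidates, key=score)
    for _ in range(generations):
        # Propose a few mutated variants of the current best candidate.
        children = [mutate(best, rng) for _ in range(4)]
        # Keep whichever scores highest (including the current best).
        best = max([best, *children], key=score)
    return best

# Toy stand-in for a workflow evaluation: "evolve" a number toward a target.
target = 42
score = lambda x: -abs(x - target)
mutate = lambda x, rng: x + rng.choice([-3, -1, 1, 3])
result = evolve([0, 10, 20], score, mutate, generations=40)
```

In EvoAgentX, the candidates are agent workflows and the score comes from automatic evaluators; the loop shape is the same.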
✨ Key Features

- 🧱 Agent Workflow Autoconstruction
  From a single prompt, EvoAgentX builds structured, multi-agent workflows tailored to the task.
- 📊 Built-in Evaluation
  It integrates automatic evaluators that score agent behavior against task-specific criteria.
- 🔄 Self-Evolution Engine
  Agents don't just work; they learn. EvoAgentX improves workflows using self-evolving algorithms.
- 🧩 Plug-and-Play Compatibility
  Easily integrate OpenAI and Qwen models, as well as other popular models such as Claude, DeepSeek, and Kimi, via LiteLLM, SiliconFlow, or OpenRouter. LiteLLM also lets you use LLMs deployed locally on your own machine.
- 🧰 Comprehensive Built-in Tools
  EvoAgentX ships with a rich set of built-in tools that let agents interact with real-world environments.
- 🧠 Memory Module
  EvoAgentX supports both ephemeral (short-term) and persistent (long-term) memory systems.
- 🧑‍💻 Human-in-the-Loop (HITL) Interactions
  EvoAgentX supports interactive workflows where humans review, correct, and guide agent behavior.
🚀 What You Can Do with EvoAgentX

EvoAgentX isn't just a framework; it's your launchpad for real-world AI agents.
Whether you're an AI researcher, workflow engineer, or startup team, EvoAgentX helps you go from a vague idea to a fully functional agentic system, with minimal engineering and maximum flexibility.
Here's how:

- 🔄 Struggling to improve your workflows?
  EvoAgentX can automatically evolve and optimize your agentic workflows using SOTA self-evolving algorithms, driven by your dataset and goals.
- 🧑‍💻 Want to supervise the agent and stay in control?
  Insert yourself into the loop! EvoAgentX supports Human-in-the-Loop (HITL) checkpoints, so you can step in, review, or guide the workflow as needed, then step out again.
- 🧠 Frustrated by agents that forget everything?
  EvoAgentX provides both short-term and long-term memory modules, enabling your agents to remember, reflect, and improve across interactions.
- ⚙️ Lost in manual workflow orchestration?
  Just describe your goal; EvoAgentX will automatically assemble a multi-agent workflow that matches your intent.
- 🛠️ Want your agents to actually do things?
  With a rich library of built-in tools (search, code, browser, file I/O, APIs, and more), EvoAgentX empowers agents to interact with the real world, not just talk about it.
🔥 EAX Latest News

- [Aug 2025] 🎉 New Survey Released!
  Our team just published a comprehensive survey on Self-Evolving AI Agents, exploring how agents can learn, adapt, and optimize over time.
  👉 Read it on arXiv | 👉 Check the repo
- [July 2025] 🚀 EvoAgentX Framework Paper is Live!
  We officially published the EvoAgentX framework paper on arXiv, detailing our approach to building evolving agentic workflows.
  👉 Check it out
- [July 2025] ⭐️ 1,000 Stars Reached!
  Thanks to our amazing community, EvoAgentX has surpassed 1,000 GitHub stars!
- [May 2025] 🚀 Official Launch!
  EvoAgentX is now live! Start building self-evolving AI workflows from day one.
  👉 Get Started on GitHub
⚡ Get Started

- 🔥 Latest News
- ⚡ Get Started
- Installation
- LLM Configuration
- Automatic WorkFlow Generation
- EvoAgentX Built-in Tools Summary
- Tool-Enabled Workflows Generation
- Demo Video
- Evolution Algorithms
- Applications
- Tutorial and Use Cases
- 🗣️ EvoAgentX TALK
- 🎯 Roadmap
- 🙋 Support
- 🤝 Contributing to EvoAgentX
- 📖 Citation
- 🙏 Acknowledgements
- 📜 License
Installation
We recommend installing EvoAgentX using pip:
pip install evoagentx
or install from source:
pip install git+https://github.com/EvoAgentX/EvoAgentX.git
For local development or detailed setup (e.g., using conda), refer to the Installation Guide for EvoAgentX.
<details> <summary>Example (optional, for local development):</summary>

git clone https://github.com/EvoAgentX/EvoAgentX.git
cd EvoAgentX
# Create a new conda environment
conda create -n evoagentx python=3.11
# Activate the environment
conda activate evoagentx
# Install the package
pip install -r requirements.txt
# OR install in development mode
pip install -e .
</details>
LLM Configuration
API Key Configuration
To use LLMs with EvoAgentX (e.g., OpenAI), you must set up your API key.
<details> <summary>Option 1: Set API Key via Environment Variable</summary>

- Linux/macOS:
export OPENAI_API_KEY=<your-openai-api-key>
- Windows Command Prompt:
set OPENAI_API_KEY=<your-openai-api-key>
- Windows PowerShell:
$env:OPENAI_API_KEY="<your-openai-api-key>"  # the quotation marks are required
Once set, you can access the key in your Python code with:
import os
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
</details>
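To fail fast with a clear message when the key is missing, instead of hitting an opaque authentication error later inside the client, you can wrap the lookup in a small helper. This is an illustrative pattern, not part of EvoAgentX:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the API key, or raise a clear error if it is not set."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it in your shell or add it to a .env file"
        )
    return key
```

Calling `require_api_key()` at startup surfaces a misconfigured environment immediately, before any model request is made.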
<details>
<summary>Option 2: Use .env File</summary>
- Create a .env file in your project root and add the following:
OPENAI_API_KEY=<your-openai-api-key>
Then load it in Python:
from dotenv import load_dotenv
import os
load_dotenv() # Loads environment variables from .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
</details>
> 🔐 Tip: Don't forget to add `.env` to your `.gitignore` to avoid committing secrets.
Configure and Use the LLM
Once the API key is set, initialise the LLM with:
import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Define LLM configuration
openai_config = OpenAILLMConfig(
model="gpt-4o-mini", # Specify the model name
openai_key=OPENAI_API_KEY, # Pass the key directly
stream=True, # Enable streaming response
output_response=True # Print response to stdout
)
# Initialize the language model
llm = OpenAILLM(config=openai_config)
# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")
📖 More details on supported models and config options: LLM module guide.
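If you prefer to receive the complete response in one piece rather than streaming it to stdout, the same fields shown above can simply be flipped. This is a minimal variation on the example, assuming the same imports and `OPENAI_API_KEY` as above:

```python
# Non-streaming variant of the config above: generate() returns the
# full response instead of printing tokens as they arrive.
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",
    openai_key=OPENAI_API_KEY,
    stream=False,
    output_response=False,
)
llm = OpenAILLM(config=openai_config)
```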
Automatic WorkFlow Generation
Once your AP