

🚀 EvoAgentX: Building a Self-Evolving Ecosystem of AI Agents


<!-- Add logo here --> <div align="center"> <a href="https://github.com/EvoAgentX/EvoAgentX"> <img src="./assets/EAXLoGo.svg" alt="EvoAgentX" width="50%"> </a> </div> <h2 align="center"> Building a Self-Evolving Ecosystem of AI Agents </h2> <div align="center">

EvoAgentX Homepage Docs Discord Twitter Wechat GitHub star chart GitHub fork License

<!-- [![EvoAgentX Homepage](https://img.shields.io/badge/EvoAgentX-Homepage-blue?logo=homebridge)](https://EvoAgentX.github.io/EvoAgentX/) --> <!-- [![hf_space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-EvoAgentX-ffc107?color=ffc107&logoColor=white)](https://huggingface.co/EvoAgentX) --> </div> <div align="center"> <h3 align="center">

<a href="./README.md" style="text-decoration: underline;">English</a> | <a href="./README-zh.md">简体中文</a>

</h3> </div>

What is EvoAgentX

EvoAgentX is an open-source framework for building, evaluating, and evolving LLM-based agents or agentic workflows in an automated, modular, and goal-driven manner. At its core, EvoAgentX enables developers and researchers to move beyond static prompt chaining or manual workflow orchestration. It introduces a self-evolving agent ecosystem, where AI agents can be constructed, assessed, and optimized through iterative feedback loops, much like how software is continuously tested and improved.
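To make the build-evaluate-evolve loop concrete, here is a minimal plain-Python sketch of the idea (this is a conceptual illustration, not the EvoAgentX API): a candidate is scored, variants are proposed, and the best-scoring variant survives each round.

```python
def evolve(candidate, evaluate, generate_variants, rounds=5):
    """Toy evolve loop: keep the best-scoring variant each round.

    `evaluate` maps a candidate (e.g. a prompt or workflow spec) to a score;
    `generate_variants` proposes mutations of the current best candidate.
    """
    best, best_score = candidate, evaluate(candidate)
    for _ in range(rounds):
        for variant in generate_variants(best):
            score = evaluate(variant)
            if score > best_score:
                best, best_score = variant, score
    return best

# Toy objective: "evolve" a prompt until it mentions all required keywords.
KEYWORDS = {"summarize", "cite", "verify"}

def score(prompt):
    return sum(word in prompt for word in KEYWORDS)

def variants(prompt):
    # Deterministic mutations: append each missing-or-not keyword once.
    return [prompt + " " + kw for kw in sorted(KEYWORDS)]

best = evolve("Please summarize the article.", score, variants)
# The loop converges to a prompt containing all three keywords.
```

In EvoAgentX the candidates are agentic workflows, the evaluator is a task-specific benchmark, and the mutations come from self-evolving optimization algorithms; the control flow, however, follows this same shape.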

✨ Key Features

  • 🧱 Agent Workflow Autoconstruction

    From a single prompt, EvoAgentX builds structured, multi-agent workflows tailored to the task.

  • 🔍 Built-in Evaluation

    It integrates automatic evaluators to score agent behavior using task-specific criteria.

  • 🔁 Self-Evolution Engine

    Agents don't just work; they learn. EvoAgentX improves workflows using self-evolving algorithms.

  • 🧩 Plug-and-Play Compatibility

    Easily integrate OpenAI and Qwen models, as well as other popular models such as Claude, DeepSeek, and Kimi, through providers like LiteLLM, SiliconFlow, or OpenRouter. To use LLMs deployed locally on your own machine, you can also route them through LiteLLM.

  • 🧰 Comprehensive Built-in Tools

    EvoAgentX ships with a rich set of built-in tools that empower agents to interact with real-world environments.

  • 🧠 Memory Module

    EvoAgentX supports both ephemeral (short-term) and persistent (long-term) memory systems.

  • 🧑‍💻 Human-in-the-Loop (HITL) Interactions

    EvoAgentX supports interactive workflows where humans review, correct, and guide agent behavior.
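As a rough mental model for the memory feature above (a plain-Python sketch, not the EvoAgentX memory API), short-term memory lives only in the running process, while long-term memory persists to disk and survives across sessions:

```python
import json
import tempfile
from pathlib import Path

class ShortTermMemory:
    """Ephemeral working memory: lost when the process or conversation ends."""
    def __init__(self):
        self.turns = []

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

class LongTermMemory:
    """Persistent memory: survives across sessions via a JSON file on disk."""
    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact):
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

# Usage: a fact stored in one "session" is visible to a fresh instance later.
store = Path(tempfile.mkdtemp()) / "memory.json"
LongTermMemory(store).remember("user prefers concise answers")
recalled = LongTermMemory(store).facts  # a new instance re-reads the file
```

EvoAgentX's actual memory module is richer (retrieval, reflection, and so on), but the ephemeral-versus-persistent split works the same way.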

🚀 What You Can Do with EvoAgentX

EvoAgentX isn't just a framework; it's your launchpad for real-world AI agents.

Whether you're an AI researcher, workflow engineer, or startup team, EvoAgentX helps you go from a vague idea to a fully functional agentic system, with minimal engineering and maximum flexibility.

Here's how:

  • ๐Ÿ” Struggling to improve your workflows?
    EvoAgentX can automatically evolve and optimize your agentic workflows using SOTA self-evolving algorithms, driven by your dataset and goals.

  • ๐Ÿง‘โ€๐Ÿ’ป Want to supervise the agent and stay in control?
    Insert yourself into the loop! EvoAgentX supports Human-in-the-Loop (HITL) checkpoints, so you can step in, review, or guide the workflow as needed โ€” and step out again.

  • ๐Ÿง  Frustrated by agents that forget everything?
    EvoAgentX provides both short-term and long-term memory modules, enabling your agents to remember, reflect, and improve across interactions.

  • โš™๏ธ Lost in manual workflow orchestration?
    Just describe your goal โ€” EvoAgentX will automatically assemble a multi-agent workflow that matches your intent.

  • ๐ŸŒ Want your agents to actually do things?
    With a rich library of built-in tools (search, code, browser, file I/O, APIs, and more), EvoAgentX empowers agents to interact with the real world, not just talk about it.

🔥 EAX Latest News

  • [Aug 2025] 🚀 New Survey Released!
    Our team just published a comprehensive survey on Self-Evolving AI Agents, exploring how agents can learn, adapt, and optimize over time.
    👉 Read it on arXiv 👉 Check the repo

  • [July 2025] 📚 EvoAgentX Framework Paper is Live!
    We officially published the EvoAgentX framework paper on arXiv, detailing our approach to building evolving agentic workflows.
    👉 Check it out

  • [July 2025] ⭐️ 1,000 Stars Reached!
    Thanks to our amazing community, EvoAgentX has surpassed 1,000 GitHub stars!

  • [May 2025] 🚀 Official Launch!
    EvoAgentX is now live! Start building self-evolving AI workflows from day one.
    🔧 Get Started on GitHub

⚡ Get Started

Installation

We recommend installing EvoAgentX using pip:

pip install evoagentx

or install from source:

pip install git+https://github.com/EvoAgentX/EvoAgentX.git

For local development or detailed setup (e.g., using conda), refer to the Installation Guide for EvoAgentX.

<details> <summary>Example (optional, for local development):</summary>
git clone https://github.com/EvoAgentX/EvoAgentX.git
cd EvoAgentX
# Create a new conda environment
conda create -n evoagentx python=3.11

# Activate the environment
conda activate evoagentx

# Install the package
pip install -r requirements.txt
# OR install in development mode
pip install -e .
</details>

LLM Configuration

API Key Configuration

To use LLMs with EvoAgentX (e.g., OpenAI), you must set up your API key.

<details> <summary>Option 1: Set API Key via Environment Variable</summary>
  • Linux/macOS:
export OPENAI_API_KEY=<your-openai-api-key>
  • Windows Command Prompt:
set OPENAI_API_KEY=<your-openai-api-key>
  • Windows PowerShell:
$env:OPENAI_API_KEY="<your-openai-api-key>" # the quotes are required

Once set, you can access the key in your Python code with:

import os
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
</details> <details> <summary>Option 2: Use .env File</summary>
  • Create a .env file in your project root and add the following:
OPENAI_API_KEY=<your-openai-api-key>

Then load it in Python:

from dotenv import load_dotenv 
import os 

load_dotenv() # Loads environment variables from .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
</details> <!-- > 🔐 Tip: Don't forget to add `.env` to your `.gitignore` to avoid committing secrets. -->
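Whichever option you choose, a missing key surfaces later as a confusing authentication error. A small defensive helper (our own suggestion, not part of EvoAgentX) makes the failure immediate and self-explanatory:

```python
import os

def require_env(name="OPENAI_API_KEY"):
    """Return the named environment variable, or fail fast with a clear hint."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Export it in your shell or add it to a .env "
            "file (loaded with python-dotenv) before initialising the LLM."
        )
    return value
```

Call `require_env()` once at startup, before constructing any LLM config, so misconfiguration is reported at the earliest possible point.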

Configure and Use the LLM

Once the API key is set, initialise the LLM with:

import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define LLM configuration
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",       # Specify the model name
    openai_key=OPENAI_API_KEY, # Pass the key directly
    stream=True,               # Enable streaming response
    output_response=True       # Print response to stdout
)

# Initialize the language model
llm = OpenAILLM(config=openai_config)

# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")

📖 More details on supported models and config options: LLM module guide.

Automatic WorkFlow Generation

Once your API key and LLM are configured, EvoAgentX can automatically assemble a multi-agent workflow from a natural-language goal; see the workflow documentation for details.

No findings