OpenEvolve
Open-source implementation of AlphaEvolve
<div align="center"> <img src="openevolve-logo.png" alt="OpenEvolve Logo" width="400">

🧬 The most advanced open-source evolutionary coding agent
Turn your LLMs into autonomous code optimizers that discover breakthrough algorithms
<p align="center"> <a href="https://github.com/algorithmicsuperintelligence/openevolve/stargazers"><img src="https://img.shields.io/github/stars/algorithmicsuperintelligence/openevolve?style=social" alt="GitHub stars"></a> <a href="https://pypi.org/project/openevolve/"><img src="https://img.shields.io/pypi/v/openevolve" alt="PyPI version"></a> <a href="https://pypi.org/project/openevolve/"><img src="https://img.shields.io/pypi/dm/openevolve" alt="PyPI downloads"></a> <a href="https://github.com/algorithmicsuperintelligence/openevolve/blob/main/LICENSE"><img src="https://img.shields.io/github/license/algorithmicsuperintelligence/openevolve" alt="License"></a> </p>🚀 Quick Start • Examples • System Messages • Discussions
From random search to state-of-the-art: Watch your code evolve in real-time
</div>

Why OpenEvolve?
<table> <tr> <td width="33%">Autonomous Discovery
LLMs don't just optimize existing code; they discover entirely new algorithms, with no human guidance needed.
</td> <td width="33%">Proven Results
2-3x speedups on real hardware. State-of-the-art circle packing. Breakthrough optimizations.
</td> <td width="33%">Research Grade
Full reproducibility, extensive evaluation pipelines, and scientific rigor built-in.
</td> </tr> </table>

OpenEvolve vs Manual Optimization:
| Aspect | Manual Optimization | OpenEvolve |
|--------|---------------------|------------|
| Time to Solution | Days to weeks | Hours |
| Exploration Breadth | Limited by human creativity | Broad LLM-driven search |
| Reproducibility | Hard to replicate | Deterministic with seeded runs |
| Multi-objective | Complex tradeoffs | Automatic Pareto optimization |
| Scaling | Doesn't scale | Parallel evolution across islands |
Proven Achievements
<div align="center">

| Domain | Achievement | Example |
|--------|-------------|---------|
| GPU Optimization | Hardware-optimized kernel discovery | MLX Metal Kernels |
| Mathematical | State-of-the-art circle packing (n=26) | Circle Packing |
| Algorithm Design | Adaptive sorting algorithms | Rust Adaptive Sort |
| Scientific Computing | Automated filter design | Signal Processing |
| Multi-Language | Python, Rust, R, Metal shaders | All Examples |
</div>

🚀 Quick Start
Get from zero to evolving code in 30 seconds:
# Install OpenEvolve
pip install openevolve
# The example uses Google Gemini by default (free tier available)
# Get your API key from: https://aistudio.google.com/apikey
export OPENAI_API_KEY="your-gemini-api-key" # Yes, use OPENAI_API_KEY env var
# Run your first evolution!
python openevolve-run.py examples/function_minimization/initial_program.py \
examples/function_minimization/evaluator.py \
--config examples/function_minimization/config.yaml \
--iterations 50
Note: The example config uses Gemini by default, but you can use any OpenAI-compatible provider by modifying the config.yaml. See the configs for full configuration options.
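As a sketch only, a minimal config for an OpenAI-compatible provider might look like the fragment below. The field names here are assumptions for illustration; check the example configs shipped with the repo for the authoritative schema and option names.

```yaml
# Illustrative sketch -- field names are assumptions, not the
# authoritative schema; see the repo's example configs.
llm:
  api_base: "https://api.openai.com/v1"   # any OpenAI-compatible endpoint
  models:
    - name: "gpt-4o-mini"                 # hypothetical model choice
      weight: 1.0
max_iterations: 100
```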
Library Usage
OpenEvolve can be used as a library without any external files:
from openevolve import run_evolution, evolve_function
# Evolution with inline code (no files needed!)
result = run_evolution(
initial_program='''
def fibonacci(n):
if n <= 1: return n
return fibonacci(n-1) + fibonacci(n-2)
''',
evaluator=lambda path: {"score": benchmark_fib(path)},  # benchmark_fib: your own scoring helper
iterations=100
)
# Evolve Python functions directly
def bubble_sort(arr):
for i in range(len(arr)):
for j in range(len(arr)-1):
if arr[j] > arr[j+1]:
arr[j], arr[j+1] = arr[j+1], arr[j]
return arr
result = evolve_function(
bubble_sort,
test_cases=[([3,1,2], [1,2,3]), ([5,2,8], [2,5,8])],
iterations=50
)
print(f"Evolved sorting algorithm: {result.best_code}")
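For file-based runs (as in the Quick Start), the evaluator script plays the same role as the lambda above: given the path to a candidate program, return a dict of named metrics. The sketch below is illustrative, assuming an `evaluate(program_path)` entry point and a fibonacci candidate; adapt the loading and scoring to your own problem.

```python
import importlib.util

# Illustrative evaluator sketch: load the candidate program from disk,
# exercise it, and return a dict of named metrics (higher = better).
def evaluate(program_path):
    spec = importlib.util.spec_from_file_location("candidate", program_path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
        correct = module.fibonacci(10) == 55  # spot-check correctness
    except Exception:
        return {"score": 0.0}  # broken candidates score zero
    return {"score": 1.0 if correct else 0.0}
```

Returning 0.0 for crashing candidates (rather than raising) keeps evolution moving: bad mutations are simply scored out instead of halting the run.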
Prefer Docker? See the Installation & Setup section for Docker options.
See It In Action
<details> <summary><b>Circle Packing: From Random to State-of-the-Art</b></summary>

Watch OpenEvolve discover optimal circle packing in real-time:
| Generation 1 | Generation 190 | Generation 460 (Final) |
|--------------|----------------|------------------------|
| Random placement | Learning structure | State-of-the-art result |
Result: Matches published benchmarks for n=26 circle packing problem.
</details>

<details> <summary><b>GPU Kernel Evolution</b></summary>

Before (Baseline):
// Standard attention implementation
kernel void attention_baseline(/* ... */) {
// Generic matrix multiplication
float sum = 0.0;
for (int i = 0; i < seq_len; i++) {
sum += query[tid] * key[i];
}
}
After Evolution (2.8x faster):
// OpenEvolve discovered optimization
kernel void attention_evolved(/* ... */) {
// Hardware-aware tiling + unified memory optimization
threadgroup float shared_mem[256];
// ... evolved algorithm exploiting Apple Silicon architecture
}
Performance Impact: 2.8x speedup on Apple M1 Pro, maintaining numerical accuracy.
</details>

How OpenEvolve Works
OpenEvolve implements a sophisticated evolutionary coding pipeline that goes far beyond simple optimization:
Core Innovation: MAP-Elites + LLMs
- Quality-Diversity Evolution: Maintains diverse populations across feature dimensions
- Island-Based Architecture: Multiple populations prevent premature convergence
- LLM Ensemble: Multiple models with intelligent fallback strategies
- Artifact Side-Channel: Error feedback improves subsequent generations
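The MAP-Elites idea is easy to see in miniature: bin candidates by feature coordinates and keep only the best candidate per bin, so the archive stays diverse instead of collapsing onto a single optimum. The toy below is purely illustrative (integers instead of programs, made-up features and fitness), not OpenEvolve's internal API:

```python
import random

# Toy MAP-Elites loop: keep the best candidate per feature-space cell.
# Candidates are integers 0..99; in OpenEvolve they are programs and
# the features are code metrics like complexity or diversity.
def features(x):
    return (x // 10, x % 2)  # (magnitude bucket, parity)

archive = {}  # cell -> (fitness, candidate)
random.seed(42)
for _ in range(1000):
    # Mutate a random elite if the archive is non-empty, else sample fresh
    parent = random.choice(list(archive.values()))[1] if archive else random.randint(0, 99)
    child = max(0, min(99, parent + random.randint(-5, 5)))
    fitness = -abs(child - 63)  # toy objective: closeness to 63
    cell = features(child)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, child)

# The archive now holds diverse elites across many cells,
# not just candidates clustered around the single optimum.
```

Even cells far from the optimum keep their local best, which is exactly what gives later generations diverse "stepping stones" to mutate from.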
Advanced Features
<details> <summary><b>Scientific Reproducibility</b></summary>

- Comprehensive Seeding: Every component (LLM, database, evaluation) is seeded
- Default Seed=42: Immediate reproducible results out of the box
- Deterministic Evolution: Exact reproduction of runs across machines
- Component Isolation: Hash-based isolation prevents cross-contamination
- Universal API: Works with OpenAI, Google, local models, and proxies
- Intelligent Ensembles: Weighted combinations with sophisticated fallback
- Test-Time Compute: Enhanced reasoning through proxy systems (see OptiLLM setup)
- Plugin Ecosystem: Support for advanced reasoning plugins
- Double Selection: Different programs for performance vs inspiration
- Adaptive Feature Dimensions: Custom quality-diversity metrics
- Migration Patterns: Ring topology with controlled gene flow
- Multi-Strategy Sampling: Elite, diverse, and exploratory selection
</details>

Perfect For
| Use Case | Why OpenEvolve Excels |
|----------|-----------------------|
| Performance Optimization | Discovers hardware-specific optimizations humans miss |
| Algorithm Discovery | Finds novel approaches to classic problems |
| Scientific Computing | Automates tedious manual tuning processes |
| Competitive Programming | Generates multiple solution strategies |
| Multi-Objective Problems | Pareto-optimal solutions across dimensions |
🛠 Installation & Setup
Requirements
- Python: 3.10+
- LLM Access: Any OpenAI-compatible API
- Optional: Docker for containerized runs
Installation Options
<details> <summary><b>📦 PyPI (Recommended)</b></summary>

pip install openevolve
</details>
<details>
<summary><b>🔧 Development Install</b></summary>
git clone https://github.com/algorithmicsuperintelligence/openevolve.git
cd openevolve
pip install -e ".[dev]"
</details>
<details>
<summary><b>🐳 Docker</b></summary>
# Pull the image
docker pull ghcr.io/algorithmicsuperintelligence/openevolve:latest
# Run an example
docker run --rm -v $(pwd):/app ghcr.io/algorithmicsuperintelligence/openevolve:latest \
examples/function_minimization/initial_program.py \
examples/function_minimization/evaluator.py --iterations 100
</details>
Cost Estimation
Cost depends on your LLM provider and iterations:
- o3: ~$0.15-0.60 per iteration (depending on code size)
- o3-mini: ~$0.03-0.12 per iteration (more cost-effective)
- Gemini-2.5-Pro: ~$0.08-0.30 per iteration
- Gemini-2.5-Flash: ~$0.01-0.05 per iteration (fastest and cheapest)
- Local models: Nearly free after setup
- OptiLLM: Use cheaper models with test-time compute for better results
Cost-saving tips:
- Start with fewer iterations (100-200)
- Use o3-mini, Gemini-2.5-Flash or local models for exploration
- Use cascade evaluation to filter bad programs early
- Configure smaller population sizes initially
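As a sanity check, the per-iteration figures above translate into a quick back-of-envelope range. The sketch below just multiplies the rough estimates listed here; actual costs depend on your provider, prompt sizes, and code length.

```python
# Back-of-envelope run cost, using the rough per-iteration
# estimates listed above (illustrative, not authoritative pricing).
PER_ITERATION_USD = {
    "o3": (0.15, 0.60),
    "o3-mini": (0.03, 0.12),
    "gemini-2.5-pro": (0.08, 0.30),
    "gemini-2.5-flash": (0.01, 0.05),
}

def estimate_cost(model, iterations):
    low, high = PER_ITERATION_USD[model]
    return low * iterations, high * iterations

low, high = estimate_cost("gemini-2.5-flash", 200)
print(f"~${low:.2f}-${high:.2f} for 200 iterations")  # ~$2.00-$10.00 for 200 iterations
```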
LLM Provider Setup
OpenEvolve works with any OpenAI-compatible API:
<details> <summary><b>🔥 OpenAI (Direct)</b></summary>

export OPENAI_API_KEY="sk-..."
# Uses OpenAI endpoints by default
</details>
<details>
<summary><b>🤖 Google Gemini</b></summary>
# config.yaml
ll