# AdalFlow

AdalFlow: The library to build & auto-optimize LLM applications.
## Why AdalFlow

- **100% open-source Agents SDK**: lightweight, requires no additional API to set up, and ships with Human-in-the-Loop and Tracing functionalities.
- **Say goodbye to manual prompting**: AdalFlow provides a unified auto-differentiative framework for both zero-shot optimization and few-shot prompt optimization. Our research, LLM-AutoDiff and Learn-to-Reason Few-shot In-Context Learning, achieves the highest accuracy among all auto-prompt optimization libraries.
- **Switch your LLM app to any model via a config**: AdalFlow provides model-agnostic building blocks for LLM task pipelines, ranging from RAG and Agents to classical NLP tasks.
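Config-driven model switching can be sketched in plain Python (no AdalFlow imports; the config keys and provider names here are illustrative assumptions, not AdalFlow's schema): the pipeline code stays fixed while the provider and model come from a config dict, which you would then pass as `model_kwargs` to an AdalFlow component.

```python
# Minimal sketch of config-driven model selection (illustrative, not AdalFlow's API).
MODEL_CONFIGS = {
    "openai": {"model": "gpt-4o", "temperature": 0.3},
    "anthropic": {"model": "claude-3-5-sonnet-20241022", "temperature": 0.3},
}

def build_model_kwargs(provider: str) -> dict:
    """Look up provider-specific model kwargs from the config."""
    if provider not in MODEL_CONFIGS:
        raise ValueError(f"Unknown provider: {provider}")
    return dict(MODEL_CONFIGS[provider])

# Swapping models is a one-line config change, not a code change.
kwargs = build_model_kwargs("openai")
```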
View Documentation
## Quick Start

Install AdalFlow with pip:

```shell
pip install adalflow
```
### Hello World Agent Example
```python
from adalflow import Agent, Runner
from adalflow.components.model_client.openai_client import OpenAIClient
from adalflow.core.types import (
    ToolCallActivityRunItem,
    RunItemStreamEvent,
    ToolCallRunItem,
    ToolOutputRunItem,
    FinalOutputItem,
)
import asyncio

# Define tools
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error: {e}"

async def web_search(query: str = "what is the weather in SF today?") -> str:
    """Web search on query."""
    await asyncio.sleep(0.5)
    return "San Francisco will be mostly cloudy today with some afternoon sun, reaching about 67 °F (20 °C)."

def counter(limit: int):
    """A counter that counts up to a limit."""
    final_output = []
    for i in range(1, limit + 1):
        stream_item = f"Count: {i}/{limit}"
        final_output.append(stream_item)
        yield ToolCallActivityRunItem(data=stream_item)
    yield final_output

# Create agent with tools
agent = Agent(
    name="MyAgent",
    tools=[calculator, web_search, counter],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3},
    max_steps=5,
)
runner = Runner(agent=agent)
```
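A note on the `calculator` tool: it uses `eval` for brevity, which will execute arbitrary Python handed to it by the model. A safer drop-in for arithmetic-only use (an illustrative sketch, not part of AdalFlow) walks the expression's AST and only permits numeric operators:

```python
import ast
import operator

# Whitelisted arithmetic operators (assumption: only these are needed).
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression without executing arbitrary code."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)
```

Anything outside the whitelist (names, calls, attribute access) raises `ValueError` instead of executing.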
#### 1. Synchronous Call Mode
```python
# Sync call - returns RunnerResult with complete execution history
result = runner.call(
    prompt_kwargs={"input_str": "Calculate 15 * 7 + 23 and count to 5"}
)
print(result.answer)
# Output: The result of 15 * 7 + 23 is 128. The counter counted up to 5: 1, 2, 3, 4, 5.

# Access step history
for step in result.step_history:
    print(f"Step {step.step}: {step.function.name} -> {step.observation}")
# Output:
# Step 0: calculator -> The result of 15 * 7 + 23 is 128
# Step 1: counter -> ['Count: 1/5', 'Count: 2/5', 'Count: 3/5', 'Count: 4/5', 'Count: 5/5']
```
#### 2. Asynchronous Call Mode
```python
# Async call - similar output structure to the sync call
result = await runner.acall(
    prompt_kwargs={"input_str": "What's the weather in SF and calculate 42 * 3"}
)
print(result.answer)
# Output: San Francisco will be mostly cloudy today with some afternoon sun,
# reaching about 67 °F (20 °C). The result of 42 * 3 is 126.
```
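`acall` returns a coroutine, so outside a notebook it must be awaited inside an event loop. A minimal driver pattern looks like the following; it uses a stand-in runner (an assumption, so the snippet runs without an API key) where you would substitute the `Runner(agent=agent)` built above:

```python
import asyncio

class _DemoRunner:
    """Stand-in for adalflow's Runner so this sketch runs offline (assumption)."""
    async def acall(self, prompt_kwargs: dict):
        await asyncio.sleep(0)  # simulate async model/tool work
        # Return an object with an .answer attribute, mirroring RunnerResult's shape.
        return type("R", (), {"answer": f"echo: {prompt_kwargs['input_str']}"})()

async def main() -> str:
    runner = _DemoRunner()  # replace with Runner(agent=agent)
    result = await runner.acall(prompt_kwargs={"input_str": "42 * 3"})
    return result.answer

answer = asyncio.run(main())
print(answer)
```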
#### 3. Async Streaming Mode
```python
# Async streaming - real-time event processing
streaming_result = runner.astream(
    prompt_kwargs={"input_str": "Calculate 100 + 50 and count to 3"},
)

# Process streaming events in real time
async for event in streaming_result.stream_events():
    if isinstance(event, RunItemStreamEvent):
        if isinstance(event.item, ToolCallRunItem):
            print(f"🔧 Calling: {event.item.data.name}")
        elif isinstance(event.item, ToolCallActivityRunItem):
            print(f"📝 Activity: {event.item.data}")
        elif isinstance(event.item, ToolOutputRunItem):
            print(f"✅ Output: {event.item.data.output}")
        elif isinstance(event.item, FinalOutputItem):
            print(f"🎯 Final: {event.item.data.answer}")
# Output:
# 🔧 Calling: calculator
# ✅ Output: The result of 100 + 50 is 150
# 🔧 Calling: counter
# 📝 Activity: Count: 1/3
# 📝 Activity: Count: 2/3
# 📝 Activity: Count: 3/3
# ✅ Output: ['Count: 1/3', 'Count: 2/3', 'Count: 3/3']
# 🎯 Final: The result of 100 + 50 is 150. Counted to 3 successfully.
```
Set your `OPENAI_API_KEY` environment variable to run these examples.
Try the full Agent tutorial in Colab:
View the Quickstart to learn how AdalFlow optimizes LM workflows end-to-end in 15 minutes.
Go to Documentation for tracing, human-in-the-loop, and more.
<!-- * Try the [Building Quickstart](https://colab.research.google.com/drive/1TKw_JHE42Z_AWo8UuRYZCO2iuMgyslTZ?usp=sharing) in Colab to see how AdalFlow can build the task pipeline, including Chatbot, RAG, agent, and structured output. * Try the [Optimization Quickstart](https://colab.research.google.com/github/SylphAI-Inc/AdalFlow/blob/main/notebooks/qas/adalflow_object_count_auto_optimization.ipynb) to see how AdalFlow can optimize the task -->
