# Lár: The PyTorch for Agents
Lár (Irish for "core" or "center") is the open-source standard for Deterministic, Auditable, and Air-Gap-Capable AI agents.
It is a "define-by-run" framework that acts as a Flight Recorder for your agent, creating a complete audit trail for every single step.
> [!NOTE]
> Lár is NOT a wrapper. It is a standalone, ground-up engine designed for reliability. It does not wrap LangChain, OpenAI Swarm, or any other library. It is pure, dependency-lite Python code optimized for "Code-as-Graph" execution.
The "Black Box" Problem
You are a developer launching a mission-critical AI agent. It works on your machine, but in production, it fails. You don't know why, where, or how much it cost. You just get a 100-line stack trace from a "magic" framework.
The "Glass Box" Solution
Lár removes the magic.
It is a simple engine that runs one node at a time, logging every single step to a forensic Flight Recorder.
This means you get:
- Instant Debugging: See the exact node and error that caused the crash.
- Free Auditing: A complete history of every decision and token cost, built-in by default.
- Total Control: Build deterministic "assembly lines," not chaotic chat rooms.
"This demonstrates that for a graph without randomness or external model variability, Lár executes deterministically and produces identical state traces."
Stop guessing. Start building agents you can trust.
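The "one node at a time, log everything" idea can be sketched in a few lines of plain Python. This is an illustrative guess at the shape, not Lár's actual `GraphExecutor` API: the node format and history entries here are assumptions.

```python
# Minimal "glass box" executor sketch (illustrative, not Lár's real API).
# Each node is a (name, fn) pair; fn takes and returns a state dict.

def run_graph(nodes, state):
    history = []  # the flight log: one entry per executed node
    for name, fn in nodes:
        try:
            state = fn(dict(state))  # one node at a time, on a copy of state
            history.append({"node": name, "status": "ok", "state": dict(state)})
        except Exception as exc:
            # Record the exact node and error, then stop: no downstream node runs.
            history.append({"node": name, "status": "error", "error": repr(exc)})
            break
    return state, history
```

When a node raises, the last history entry names the failing node and the exception, which is the whole "your history log is the debugger" claim in miniature.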
## Why Lár is Better: The "Glass Box" Advantage
| Feature | The "Black Box" (LangChain / CrewAI) | The "Glass Box" (Lár) |
| :--- | :--- | :--- |
| Debugging | A Nightmare. When an agent fails, you get a 100-line stack trace from inside the framework's "magic" AgentExecutor. You have to guess what went wrong. | Instant & Precise. Your history log is the debugger. You see the exact node that failed (e.g., ToolNode), the exact error (APIConnectionError), and the exact state that caused it. |
| Auditability | External & Paid. "What happened?" is a mystery. You need an external, paid tool like LangSmith to add a "flight recorder" to your "black box." | Built-in & Free. The "Flight Log" (history log) is the core, default, open-source output of the GraphExecutor, built in from day one. |
| Multi-Agent Collaboration | Chaotic "Chat Room." Agents are put in a room to "talk" to each other. It's "magic," but it's uncontrollable. You can't be sure who will talk next or if they'll get stuck in a loop. | Deterministic "Assembly Line." You are the architect. You define the exact path of collaboration using RouterNode and ToolNode. |
| Deterministic Control | None. You can't guarantee execution order. The "Tweeter" agent might run before the "Researcher" agent is finished. | Full Control. The "Tweeter" (LLMNode) cannot run until the "RAG Agent" (ToolNode) has successfully finished and saved its result to the state. |
| Data Flow | Implicit & Messy. Agents pass data by "chatting." The ToolNode's output might be polluted by another agent's "thoughts." | Explicit & Hard-Coded. The data flow is defined by you: RAG Output -> Tweet Input. The "Tweeter" only sees the data it's supposed to. |
| Resilience & Cost | Wasteful & Brittle. If the RAG agent fails, the Tweeter agent might still run with no data, wasting API calls and money. A loop of 5 agents all chatting can hit rate limits fast. | Efficient & Resilient. If the RAG agent fails, the Tweeter never runs. Your graph stops, saving you money and preventing a bad output. Your LLMNode's built-in retry handles transient errors silently. |
| Core Philosophy | Sells "Magic." | Sells "Trust." |
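The "assembly line" rows above can be made concrete. In this hedged sketch, the node names and the `run_pipeline` helper are invented for illustration (not Lár's real API): the tweeter step reads exactly one key the RAG step wrote, and if the RAG step fails, the tweeter never runs.

```python
# Hedged sketch of explicit data flow plus halt-on-failure.
# rag_node, tweet_node, and run_pipeline are illustrative, not Lár's API.

def rag_node(state):
    # Writes exactly one well-known key; downstream nodes read only this.
    state["rag_result"] = "Lár logs every step."
    return state

def tweet_node(state):
    # Explicit data flow: RAG output -> tweet input, nothing else leaks in.
    state["tweet"] = f"Did you know? {state['rag_result']}"
    return state

def run_pipeline(state, steps):
    for step in steps:
        try:
            step(state)
        except Exception as exc:
            state["error"] = repr(exc)
            break  # halt: later steps (and their API spend) never run
    return state
```

If `rag_node` raises, `run_pipeline` records the error and `tweet_node` is never invoked, which is the "Efficient & Resilient" row in miniature.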
## Universal Model Support: Powered by LiteLLM
Lár runs on 100+ Providers. Because Lár is built on the robust LiteLLM adapter, you are not locked into one vendor.
Start with OpenAI for prototyping. Deploy with Azure/Bedrock for compliance. Switch to Ollama for local privacy. All with Zero Refactoring.
| Task | LangChain / CrewAI | Lár (The Unified Way) |
| :--- | :--- | :--- |
| Switching Providers | 1. Import new provider class.<br>2. Instantiate specific object.<br>3. Refactor logic. | Change 1 string.<br>model="gpt-4o" → model="ollama/phi4" |
| Code Changes | High. ChatOpenAI vs ChatBedrock classes. | Zero. The API contract is identical for every model. |
Read the Full LiteLLM Setup Guide to learn how to configure:
- Local Models (Ollama, Llama.cpp, LocalAI)
- Cloud Providers (OpenAI, Anthropic, Vertex, Bedrock, Azure)
- Advanced Config (Temperature, API Base, Custom Headers)
```python
# Want to save money? Switch to local.
# No imports to change. No logic to refactor.

# Before (Cloud)
node = LLMNode(model_name="gpt-4o", ...)

# After (Local - Ollama)
node = LLMNode(model_name="ollama/phi4", ...)

# After (Local - Generic Server)
node = LLMNode(
    model_name="openai/custom",
    generation_config={"api_base": "http://localhost:8080/v1"}
)
```
## Quick Start (v1.4.0)
The fastest way to build an agent is the CLI.
### 1. Install & Scaffold
```bash
pip install lar-engine
lar new agent my-bot
cd my-bot
poetry install  # or pip install -e .
python agent.py
```
This generates a production-ready folder structure with pyproject.toml, .env, and a template agent. (For Lár v1.4.0+)
2. The "Low Code" Way (@node)
Define nodes as simple functions. No boilerplate.
```python
from lar import node

@node(output_key="summary")
def summarize_text(state):
    # Access state like a dictionary (New in v1.4.0!)
    text = state["text"]
    return llm.generate(text)
```
(See examples/v1_4_showcase.py for a full comparison)
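For intuition, here is a guess at the shape of such a decorator. This toy version is not Lár's implementation (the real engine wires nodes into a graph; `llm.generate` is stubbed with a truncation), it just shows how a return value can be merged into dict-like state under `output_key`:

```python
# Toy @node sketch, for intuition only; not Lár's real decorator.

def node(output_key):
    def decorate(fn):
        def run(state):
            state[output_key] = fn(state)  # merge return value into shared state
            return state
        run.output_key = output_key        # let an executor introspect the node
        return run
    return decorate

@node(output_key="summary")
def summarize_text(state):
    # Stand-in for llm.generate(text): truncate instead of calling a model.
    return state["text"][:20] + "..."
```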
## The Game Changer: Hybrid Cognitive Architecture
Most frameworks are "All LLM." This doesn't scale. You cannot run 1,000 agents if every step costs $0.05 and takes 3 seconds.
1. The "Construction Site" Metaphor
- The Old Way (Standard Agents): Imagine a construction site where every single worker is a high-paid Architect. To hammer a nail, they stop, "think" about the nail, write a poem about the nail, and charge you $5. It takes forever and costs a fortune.
- The Lár Way (Hybrid Swarm): Imagine One Architect and 1,000 Robots.
  - The Architect (Orchestrator Node): Looks at the blueprint ONCE. Yells: "Build the Skyscraper!"
  - The Robots (Swarm): They hear the order. They don't "think." They don't charge $5. They just execute thousands of steps instantly.
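The metaphor maps to code directly. In this sketch the single "architect" LLM call is stubbed with a canned plan (`fake_llm` is an assumption standing in for one real LLMNode call); the "robots" are plain Python and cost nothing per step:

```python
def fake_llm(prompt):
    # Stand-in for the ONE real LLM call the orchestrator makes.
    return "floors=3;rooms_per_floor=2"

def orchestrate(prompt):
    # 1% LLM: the architect reads the blueprint once and emits a plan.
    plan = dict(kv.split("=") for kv in fake_llm(prompt).split(";"))
    return int(plan["floors"]), int(plan["rooms_per_floor"])

def swarm_build(floors, rooms):
    # 99% code: the robots execute every step deterministically, no tokens burned.
    return [f"floor {f} room {r}" for f in range(floors) for r in range(rooms)]

floors, rooms = orchestrate("Build the skyscraper!")
steps = swarm_build(floors, rooms)  # 6 deterministic steps, 1 LLM call total
```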
### 2. The Numbers Don't Lie
We prove this in examples/scale/1_corporate_swarm.py.
| Feature | Standard "Agent Builder" (LangChain/CrewAI) | Lár "Hybrid" Architecture |
| :--- | :--- | :--- |
| Logic | 100% LLM Nodes. Every step is a prompt. | 1% LLM (Orchestrator) + 99% Code (Swarm) |
| Cost | $$$ (60 LLM calls). | $ (1 LLM call). |
| Speed | Slow (60s+ latency). | Instant (0.08s for 64 steps). |
| Reliability | Low. "Telephone Game" effect. | High. Deterministic execution. |
### 3. Case Study: The "Smoking Gun" Proof
We rebuilt the same "Corporate Swarm" at full scale in LangChain/LangGraph (examples/comparisons/langchain_swarm_fail.py) to compare.
It crashed at Step 25:
```text
-> Step 24
CRASH CONFIRMED: Recursion limit of 25 reached without hitting a stop condition.
LangGraph Engine stopped execution due to Recursion Limit.
```
Why this matters:
- The "Recursion Limit" Crash: Standard executors treat agents as loops. They cap at 25 steps to prevent infinite loops. Real work (like a 60-step swarm) triggers this safety switch.
- Clone the Patterns: You don't need a framework. You need a pattern. We provide 21 single-file recipes (Examples 1-21).
- The "Token Burn": Standard frameworks use an LLM to route every step ($0.60/run). Lár uses code ($0.00/run).
- The "Telephone Game": Passing data through 60 LLM layers corrupts context. Lár passes explicit state objects.
"Lár turns Agents from 'Chatbot Prototyping' into 'High-Performance Software'."
## A Simple Self-Correcting Loop
```mermaid
graph TD
    A[Start] --> B[Step 0: PlannerNode - Writer]
    B --> C1[Step 1: ToolNode - Tester]
    C1 --> D{Step 2: RouteNode - Judge}

    %% Success path
    subgraph Success_Path
        direction TB
        G[Step 5: AddValueNode - Finalize]
    end

    %% Correction loop
    subgraph Correction_Loop
        direction TB
        E[Step 3: LLMNode - Corrector]
        F[Step 4: ClearErrorNode - Cleanup]
    end

    D -- Success --> G
    D -- Failure --> E
    E --> F
    F --> C1
    G --> H[End]

    classDef default stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
    classDef decision stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
    classDef startend stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
    class A,H startend;
    class B,C1,E,F,G default;
    class D decision;
```
The Lár Architecture: Co
