# Langroid

**Harness LLMs with Multi-Agent Programming**

**Install / Use:** `/learn @langroid/LangroidREADME`
Langroid is an intuitive, lightweight, extensible and principled
Python framework to easily build LLM-powered applications, from CMU and UW-Madison researchers.
You set up Agents, equip them with optional components (LLM,
vector-store and tools/functions), assign them tasks, and have them
collaboratively solve a problem by exchanging messages.
This Multi-Agent paradigm is inspired by the
Actor Framework
(but you do not need to know anything about this!).
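The message-exchange idea can be pictured with a minimal sketch in plain Python. This is purely illustrative of the actor-style paradigm — the class and function names below are invented for the sketch and are **not** Langroid's API:

```python
# Illustrative sketch of the actor-style message-exchange paradigm.
# NOTE: Agent/run_task here are invented for illustration; they are
# NOT Langroid's actual classes.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # how this agent answers a message

    def handle(self, message: str) -> str:
        return self.respond(message)

def run_task(asker: Agent, helper: Agent, opening: str, turns: int = 1) -> list[str]:
    """Alternate messages between two agents, collecting the transcript."""
    transcript = [opening]
    msg = opening
    for _ in range(turns):
        msg = helper.handle(msg)   # helper responds to the current message
        transcript.append(msg)
        msg = asker.handle(msg)    # asker reacts, continuing the exchange
        transcript.append(msg)
    return transcript

teacher = Agent("Teacher", lambda m: f"Feedback on: {m}")
student = Agent("Student", lambda m: f"Answer to: {m}")
log = run_task(teacher, student, "What is 2 + 3?", turns=1)
```

In Langroid itself, this turn-taking is handled for you by the `Task` orchestration shown later in this README.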
Langroid is a fresh take on LLM app-development, where considerable thought has gone
into simplifying the developer experience;
it does not use Langchain, or any other LLM framework,
and works with practically any LLM.
🔥 ✨ A Claude Code plugin is available to accelerate Langroid development with built-in patterns and best practices.
🔥 Read the (WIP) overview of the Langroid architecture, and a quick tour of Langroid.
🔥 MCP Support: Allow any LLM-Agent to leverage MCP Servers via Langroid's simple
MCP tool adapter that converts
the server's tools into Langroid's ToolMessage instances.
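Conceptually, such an adapter takes a server's tool schema (name, description, parameters) and turns it into a message class the agent can emit. The sketch below is a generic illustration of that idea only — it is **not** Langroid's actual MCP adapter code, and the `weather` tool is hypothetical:

```python
# Generic sketch of a tool-schema -> message-class adapter.
# NOTE: illustrative only; Langroid's real MCP adapter differs.
from dataclasses import make_dataclass

def adapt_tool(schema: dict):
    """Build a simple message class from a JSON-style tool schema."""
    fields = [(name, typ) for name, typ in schema["parameters"].items()]
    cls = make_dataclass(schema["name"], fields)
    cls.request = schema["name"]          # tool name the LLM must emit
    cls.purpose = schema["description"]   # shown to the LLM as a usage hint
    return cls

# Example: a hypothetical MCP server exposing a 'weather' tool
WeatherTool = adapt_tool({
    "name": "weather",
    "description": "Get current weather for a city",
    "parameters": {"city": str},
})
msg = WeatherTool(city="Toronto")
```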
📢 Companies are using/adapting Langroid in production. Here is a quote:
> Nullify uses AI Agents for secure software development. It finds, prioritizes and fixes vulnerabilities. We have internally adapted Langroid's multi-agent orchestration framework in production, after evaluating CrewAI, Autogen, LangChain, Langflow, etc. We found Langroid to be far superior to those frameworks in terms of ease of setup and flexibility. Langroid's Agent and Task abstractions are intuitive, well thought out, and provide a great developer experience. We wanted the quickest way to get something in production. With other frameworks it would have taken us weeks, but with Langroid we got to good results in minutes. Highly recommended! <br> -- Jacky Wong, Head of AI at Nullify.
🔥 See this Intro to Langroid blog post from the LanceDB team
🔥 Just published in ML for Healthcare (2024): a Langroid-based Multi-Agent RAG system for pharmacovigilance, see blog post
We welcome contributions: See the contributions document for ideas on what to contribute.
Are you building LLM Applications, or want help with Langroid for your company, or want to prioritize Langroid features for your company use-cases? Prasad Chalasani is available for consulting (advisory/development): pchalasani at gmail dot com.
Sponsorship is also accepted via GitHub Sponsors
Questions, Feedback, Ideas? Join us on Discord!
## Quick glimpse of coding with Langroid
This is just a teaser; there's much more, like function-calling/tools, Multi-Agent Collaboration, Structured Information Extraction, DocChatAgent (RAG), SQLChatAgent, non-OpenAI local/remote LLMs, etc. Scroll down or see docs for more. See the Langroid Quick-Start Colab that builds up to a 2-agent information-extraction example using the OpenAI ChatCompletion API. See also this version that uses the OpenAI Assistants API instead.
🔥 Just released! Example script showing how you can use Langroid multi-agents and tools to extract structured information from a document using only a local LLM (Mistral-7b-instruct-v0.2).
import langroid as lr
import langroid.language_models as lm
# set up LLM
llm_cfg = lm.OpenAIGPTConfig( # or OpenAIAssistant to use Assistant API
    # any model served via an OpenAI-compatible API
    chat_model=lm.OpenAIChatModel.GPT4o,  # or, e.g., "ollama/mistral"
)
# use LLM directly
mdl = lm.OpenAIGPT(llm_cfg)
response = mdl.chat("What is the capital of Ontario?", max_tokens=10)
# use LLM in an Agent
agent_cfg = lr.ChatAgentConfig(llm=llm_cfg)
agent = lr.ChatAgent(agent_cfg)
agent.llm_response("What is the capital of China?")
response = agent.llm_response("And India?") # maintains conversation state
# wrap Agent in a Task to run interactive loop with user (or other agents)
task = lr.Task(agent, name="Bot", system_message="You are a helpful assistant")
task.run("Hello") # kick off with user saying "Hello"
# 2-Agent chat loop: Teacher Agent asks questions to Student Agent
teacher_agent = lr.ChatAgent(agent_cfg)
teacher_task = lr.Task(
    teacher_agent, name="Teacher",
    system_message="""
    Ask your student concise numbers questions, and give feedback.
    Start with a question.
    """
)
student_agent = lr.ChatAgent(agent_cfg)
student_task = lr.Task(
    student_agent, name="Student",
    system_message="Concisely answer the teacher's questions.",
    single_round=True,
)
teacher_task.add_sub_task(student_task)
teacher_task.run()
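Note how `agent.llm_response(...)` above "maintains conversation state": each turn is answered in the context of the prior turns. Under the hood this amounts to accumulating a message history that is resent to the LLM on every call. The following is a minimal plain-Python sketch of that bookkeeping — illustrative only, not Langroid's internal implementation (`HistoryAgent` and the stub "LLM" are invented here):

```python
# Minimal sketch of conversation-state bookkeeping in a chat agent.
# NOTE: illustrative only; not Langroid's internal implementation.
class HistoryAgent:
    def __init__(self, system_message: str, llm):
        self.history = [{"role": "system", "content": system_message}]
        self.llm = llm  # callable: list of messages -> reply string

    def llm_response(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        reply = self.llm(self.history)  # full history sent every turn
        self.history.append({"role": "assistant", "content": reply})
        return reply

# Stub "LLM" that just reports how much context it received
agent = HistoryAgent("You are helpful.", lambda msgs: f"(saw {len(msgs)} messages)")
agent.llm_response("What is the capital of China?")
agent.llm_response("And India?")  # the follow-up sees all prior turns
```

This is why the follow-up question "And India?" works in the example above: the earlier exchange is part of the context the LLM receives.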
## 🔥 Updates/Releases

<details>
<summary><b>Click to expand</b></summary>

- Aug 2025:
  - 0.59.0 Complete Pydantic V2 migration - 5-50x faster validation, modern Python patterns, 100% backward compatible.
- Jul 2025:
- Jun 2025:
  - 0.56.0 `TaskTool` for delegating tasks to sub-agents - enables agents to spawn sub-agents with specific tools and configurations.
  - 0.55.0 Event-based task termination with `done_sequences` - declarative task completion using event patterns.
  - 0.54.0 Portkey AI Gateway support - access 200+ models across providers through a unified API with caching, retries, observability.
- Mar-Apr 2025:
  - 0.53.0 MCP Tools Support.
  - 0.52.0 Multimodal support, i.e. allow PDF, image inputs to LLM.
  - 0.51.0 `LLMPdfParser`, generalizing `GeminiPdfParser` to parse documents directly with an LLM.
  - 0.50.0 Structure-aware Markdown chunking, with chunks enriched by section headers.
  - 0.49.0 Enable easy switch to LiteLLM Proxy-server.
  - 0.48.0 Exa Crawler, Markitdown Parser.
  - 0.47.0 Support Firecrawl URL scraper/crawler - thanks @abab-dev.
