Parlant
The conversational control layer for customer-facing AI agents. Parlant is a context-engineering framework optimized for controlling customer interactions.
<p>
  <a href="https://pypi.org/project/parlant/"><img alt="PyPI" src="https://img.shields.io/pypi/v/parlant?color=blue"></a>
  <img alt="Python 3.10+" src="https://img.shields.io/badge/python-3.10+-blue">
  <a href="https://opensource.org/licenses/Apache-2.0"><img alt="License" src="https://img.shields.io/badge/license-Apache%202.0-green"></a>
  <a href="https://discord.gg/duxWqxKk6J"><img alt="Discord" src="https://img.shields.io/discord/1312378700993663007?color=7289da&logo=discord&logoColor=white"></a>
  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/emcie-co/parlant?style=social">
</p>
<p>
  <a href="https://www.parlant.io/" target="_blank">Website</a> •
  <a href="https://www.parlant.io/docs/quickstart/installation" target="_blank">Quick Start</a> •
  <a href="https://www.parlant.io/docs/quickstart/examples" target="_blank">Examples</a> •
  <a href="https://discord.gg/duxWqxKk6J" target="_blank">Discord</a>
</p>
<p>
  <a href="https://zdoc.app/de/emcie-co/parlant">Deutsch</a> |
  <a href="https://zdoc.app/es/emcie-co/parlant">Español</a> |
  <a href="https://zdoc.app/fr/emcie-co/parlant">français</a> |
  <a href="https://zdoc.app/ja/emcie-co/parlant">日本語</a> |
  <a href="https://zdoc.app/ko/emcie-co/parlant">한국어</a> |
  <a href="https://zdoc.app/pt/emcie-co/parlant">Português</a> |
  <a href="https://zdoc.app/ru/emcie-co/parlant">Русский</a> |
  <a href="https://zdoc.app/zh/emcie-co/parlant">中文</a>
</p>
<a href="https://trendshift.io/repositories/12768" target="_blank">
  <img src="https://trendshift.io/api/badge/repositories/12768" alt="Trending" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
Parlant streamlines conversational context engineering for enterprise-grade B2C (business to consumer) and sensitive B2B interactions that need to be consistent, compliant, and on-brand.
Why Parlant?
Conversational context engineering is hard because real-world interactions are diverse, nuanced, and non-linear.
❌ The Problem: What you've probably tried and couldn't get to work at scale
System prompts work until production complexity kicks in. The more instructions you add to a prompt, the faster your agent stops paying attention to any of them.
Routed graphs solve the prompt-overload problem, but the more routing you add, the more fragile it becomes when faced with the chaos of natural interactions.
🔑 The Solution: Context engineering, optimized for conversational control
Parlant solves this with context engineering: getting the right context, no more and no less, into the prompt at the right time. You define your rules, knowledge, and tools once; the engine narrows the context in real-time to what's immediately relevant to the current turn.
<img alt="Parlant Demo" src="https://github.com/emcie-co/parlant/blob/develop/docs/demo.gif?raw=true" width="100%" />

Getting started
```shell
pip install parlant
```
```python
import asyncio
import parlant.sdk as p


async def main() -> None:
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Customer Support",
            description="Handles customer inquiries for an airline",
        )

        # Evaluate and call tools only under the right conditions.
        # (research_deep_answer is a tool function you define elsewhere.)
        expert_customer = await agent.create_observation(
            condition="customer uses financial terminology like DTI or amortization",
            tools=[research_deep_answer],
        )

        # When the expert observation holds, always respond with depth.
        # The guideline automatically matches whenever the observation
        # it depends on holds...
        expert_answers = await agent.create_guideline(
            matcher=p.MATCH_ALWAYS,
            action="respond with technical depth",
            dependencies=[expert_customer],
        )

        beginner_answers = await agent.create_guideline(
            condition="customer seems new to the topic",
            action="simplify and use concrete examples",
        )

        # When both match, the beginner guideline wins: neither expert-level
        # tool data nor instructions can enter the agent's context.
        await beginner_answers.exclude(expert_customer)


asyncio.run(main())
```
Follow the 5-minute quickstart for a full walkthrough.
Parlant at a glance
You define your agent's behavior in code (not prompts), and the engine dynamically narrows the context on each turn to only what's immediately relevant, so the LLM stays focused and your agent stays aligned.
```mermaid
graph TD
    O[Observations] -->|Events| E[Contextual Matching Engine]
    G[Guidelines] -->|Instructions| E
    J["Journeys (SOPs)"] -->|Current Steps| E
    R[Retrievers] -->|Domain Knowledge| E
    GL[Glossary] -->|Domain Terms| E
    V[Variables] -->|Memories| E
    E -->|Tool Requests| T[Tool Caller]
    T -.->|Results + Optional Extra Matching Iterations| E
    T -->|**Key Result:**<br/>Focused Context Window| M[Message Generation]
```
Instead of sending a large system prompt followed by a raw conversation to the model, Parlant first assembles a focused context — matching only the instructions and tools relevant to each conversational turn — then generates a response from that narrowed context.
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#e8f5e9', 'primaryTextColor': '#1b5e20', 'primaryBorderColor': '#81c784', 'lineColor': '#66bb6a', 'secondaryColor': '#fff9e1', 'tertiaryColor': 'transparent'}}}%%
flowchart LR
    A(User):::outputNode
    subgraph Engine["Parlant Engine"]
        direction LR
        B["Match Guidelines and Resolve Journey States"]:::matchNode
        C["Call Contextually-Associated Tools and Workflows"]:::toolNode
        D["Generated Message"]:::composeNode
        E["Canned Message"]:::cannedNode
    end
    A a@-->|💬 User Input| B
    B b@--> C
    C c@-->|Fluid Output Mode?| D
    C d@-->|Strict Output Mode?| E
    D e@-->|💬 Fluid Output| A
    E f@-->|💬 Canned Output| A
    a@{animate: true}
    b@{animate: true}
    c@{animate: true}
    d@{animate: true}
    e@{animate: true}
    f@{animate: true}
    linkStyle 2 stroke-width:2px
    linkStyle 4 stroke-width:2px
    linkStyle 3 stroke-width:2px,stroke:#3949AB
    linkStyle 5 stroke-width:2px,stroke:#3949AB
    classDef composeNode fill:#F9E9CB,stroke:#AB8139,color:#7E5E1A,stroke-width:0
    classDef cannedNode fill:#DFE3F9,stroke:#3949AB,color:#1a237e,stroke-width:0
```
In this way, adding more rules makes the agent smarter, not more confused, because the engine, not the LLM, does the relevance filtering.
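The per-turn filtering idea can be sketched in a few lines of plain Python. This is a toy illustration, not Parlant's engine: the `Guideline` class, `build_context` function, and keyword-based `matches` predicates are hypothetical stand-ins for the engine's LLM-based matcher.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Guideline:
    condition: str
    action: str
    # Stand-in for the engine's LLM-based condition matcher.
    matches: Callable[[str], bool]


def build_context(guidelines: list[Guideline], user_turn: str) -> list[str]:
    """Include only the instructions whose conditions hold for this turn."""
    return [g.action for g in guidelines if g.matches(user_turn)]


guidelines = [
    Guideline(
        condition="customer uses financial terminology",
        action="respond with technical depth",
        matches=lambda turn: "amortization" in turn.lower(),
    ),
    Guideline(
        condition="customer seems new to the topic",
        action="simplify and use concrete examples",
        matches=lambda turn: "what is" in turn.lower(),
    ),
]

# Only the matching guideline's action enters the context for this turn.
print(build_context(guidelines, "How does amortization affect my payment?"))
```

Each turn gets its own narrowed instruction set, so adding a hundredth guideline costs nothing on turns where it doesn't apply.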
Is Parlant for you?
Parlant is built for teams that need their AI agent to behave reliably in front of real customers. It's a good fit if:
- You're building a customer-facing agent — support, sales, onboarding, advisory — where tone, accuracy, and compliance matter.
- You have dozens or hundreds of behavioral rules and your system prompt is buckling under the weight.
- You're in a regulated or high-stakes domain (finance, insurance, healthcare, telecom) where every response needs to be explainable and auditable.
Parlant is deployed in production at some of the most demanding organizations, including banks.
Parlant isn't just a framework. It's high-level software that solves the conversational modeling problem head-on. — Sarthak Dalabehera, Principal Engineer, Slice Bank
By far the most elegant conversational AI framework that I've come across. — Vishal Ahuja, Senior Lead, Applied AI, JPMorgan Chase
Parlant dramatically reduces the need for prompt engineering and complex flow control. Building agents becomes closer to domain modeling. — Diogo Santiago, AI Engineer, Oracle
Features
- Guidelines — Behavioral rules as condition-action pairs; the engine matches only what's relevant per turn.
- Relationships — Dependencies and exclusions between guidelines to keep the context narrow and focused.
- Journeys — Multi-turn SOPs that adapt to how the customer actually interacts.
- Canned Responses — Pre-approved response templates that eliminate hallucination at critical moments.
- Tools — External APIs and workflows, triggered only when their observation matches.
- Glossary — Domain-specific vocabulary so the agent understands customer language.
- Explainability — Full OpenTelemetry tracing; every guideline match and decision is logged.
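As a toy model of how exclusions keep the context narrow: when two guidelines both match, an exclusion drops the losing one (and anything that depends on it) from the active set before the context is assembled. The `resolve` function and the string guideline names below are illustrative, not the SDK's API.

```python
def resolve(active: set[str], exclusions: dict[str, set[str]]) -> set[str]:
    """Drop every guideline excluded by another guideline in the active set."""
    excluded: set[str] = set()
    for guideline in active:
        excluded |= exclusions.get(guideline, set())
    return active - excluded


# Mirrors the quickstart example: when both match, the beginner
# guideline excludes the expert observation from the context.
active = {"beginner_answers", "expert_customer"}
exclusions = {"beginner_answers": {"expert_customer"}}
print(resolve(active, exclusions))
```

If only the expert guideline matches, nothing is excluded and it stays active; the exclusion only bites when both sides are present on the same turn.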
Guidelines
Behavioral rules as condition-action pairs: when the condition applies, the action enters the agent's context.
Instead of cramming all guidelines in a single prompt, the engine evaluates which ones apply on each conversational turn and only includes the relevant ones in the LLM's context.
This lets you define hundreds of guidelines without degrading adherence.
```python
await agent.create_guideline(
    condition="customer uses financial terminology like DTI or amortization",
    action="respond with technical depth — skip basic explanations",
)
```
Relationships