# Honcho

Memory library for building stateful agents
Honcho is an open source memory library with a managed service for building stateful agents. Use it with any model, framework, or architecture. It enables agents to build and maintain state about any entity: users, agents, groups, ideas, and more. And because it's a continual learning system, it understands entities that change over time. Using Honcho as your memory system earns your agents higher retention and more trust, and helps you build data moats to out-compete incumbents.
Honcho has defined the Pareto Frontier of Agent Memory. Watch the video, check out our evals page, and read the blog post for more detail.
## TL;DR - Getting Started
With Honcho you can easily set up your application's workflow, save your interaction history, and leverage the reasoning it does to inform the behavior of your agents.
TypeScript examples are available in our docs.
- Install the SDK
```bash
# Python (use any one of the following)
pip install honcho-ai
uv add honcho-ai
poetry add honcho-ai
```
- Set up your Workspace, Peers, and Session, and send Messages
```python
from honcho import Honcho

# 1. Initialize your Honcho client
honcho = Honcho(workspace_id="my-app-testing")

# 2. Initialize peers
alice = honcho.peer("alice")
tutor = honcho.peer("tutor")

# 3. Create a session and add messages
session = honcho.session("session-1")

# Adding messages from a peer will automatically add them to the session
session.add_messages(
    [
        alice.message("Hey there — can you help me with my math homework?"),
        tutor.message("Absolutely. Send me your first problem!"),
    ]
)
```
- Leverage reasoning from Honcho to inform your agent's behavior
```python
# 1. Use the chat endpoint to ask questions about your users in natural language
response = alice.chat("What learning styles does the user respond to best?")

# 2. Use session context to continue a conversation with an LLM
context = session.context(summary=True, tokens=10_000)

# Convert to a format to send to OpenAI and get the next message
openai_messages = context.to_openai(assistant=tutor)

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=openai_messages,
)

# 3. Search for similar messages
results = alice.search("Math Homework")

# 4. Get a session-scoped representation of a peer
alice_representation = session.representation(alice)
```
This is a simple example of how you can use Honcho to build a chatbot and leverage insights to personalize the agent's behavior.
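For orientation, the target of `context.to_openai` is the standard OpenAI Chat Completions message list. Built by hand it looks like the following; this illustrates only the wire format, and the exact role mapping Honcho applies is an assumption here, not something this sketch verifies:

```python
# OpenAI's Chat Completions API expects a list of {"role", "content"} dicts.
# context.to_openai(assistant=tutor) produces messages in this shape, with the
# designated assistant peer's messages presumably mapped to role "assistant".
openai_messages = [
    {"role": "user", "content": "Hey there — can you help me with my math homework?"},
    {"role": "assistant", "content": "Absolutely. Send me your first problem!"},
]
print(len(openai_messages))
```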
Sign up at app.honcho.dev to get started with a managed version of Honcho.
Learn more ways to use Honcho on our developer docs.
Read about the design philosophy and history of the project on our blog.
## Project Structure
The Honcho project is split across several repositories, with this one hosting the core service logic, implemented as a FastAPI server/API that stores data about an application's state.
Client SDKs for Python and TypeScript are implemented in the sdks/ directory. Examples of how to use them are located within each SDK folder and in the SDK Reference, and further documented examples can be found in the API Reference section of the documentation.
## Usage
Sign up for an account at https://app.honcho.dev and get started with $100 in free credits. When you sign up you'll be prompted to join an organization, which will have a dedicated instance of Honcho.
Provision API keys and change your base URL to point to https://api.honcho.dev.
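A common pattern is to keep the key and base URL in environment variables. A minimal sketch follows; the variable names and the `Honcho(...)` keyword arguments are assumptions, so check the SDK reference for the supported configuration:

```python
import os

# Hypothetical environment variable names; substitute whatever your
# deployment uses. Fall back to the hosted endpoint when unset.
api_key = os.environ.get("HONCHO_API_KEY", "")
base_url = os.environ.get("HONCHO_BASE_URL", "https://api.honcho.dev")

# The client would then be constructed roughly like:
#   honcho = Honcho(api_key=api_key, base_url=base_url)
print(base_url)
```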
Additionally, Honcho can be self-hosted for testing and evaluation purposes. See the Local Development section below for details on how to set up a local version of Honcho.
## Local Development
Below is a guide on setting up a local environment for running the Honcho Server.
This guide was written using an M3 MacBook Pro. For any compatibility issues on other platforms, please raise an issue.
### Prerequisites and Dependencies
Honcho is developed using Python and uv.
- The minimum Python version is 3.9
- The minimum uv version is 0.4.9
### Setup
Once the dependencies are installed on your system, run the following steps to set up the local project.
- Clone the repository
```bash
git clone https://github.com/plastic-labs/honcho.git
```
- Enter the repository and install the Python dependencies
We recommend using a virtual environment to isolate the dependencies for Honcho
from other projects on the same system. uv will create a virtual environment
when you sync your dependencies in the project.
```bash
cd honcho
uv sync
```
This will create a virtual environment and install the dependencies for Honcho.
The default virtual environment will be located at honcho/.venv. From the repository root, activate it via:
```bash
source .venv/bin/activate
```
- Set up a database
Honcho uses Postgres with the pgvector extension for its database. An easy way to get started with a Postgres database is to create a project with Supabase.
Alternatively, a docker-compose template is available with a sample database configuration.
To use Docker:
```bash
cp docker-compose.yml.example docker-compose.yml
docker compose up -d database
```
- Edit the environment variables
Honcho uses a .env file for managing runtime environment variables. A
.env.template file is included for convenience. Several of the configurations
are not required and are only necessary for additional logging, monitoring, and
security.
Below are the required configurations:
```bash
DB_CONNECTION_URI= # Connection URI for a Postgres database (with postgresql+psycopg prefix)

# LLM provider API keys (at least one required depending on your configuration)
LLM_ANTHROPIC_API_KEY= # API key for Anthropic (used for dialectic by default)
LLM_OPENAI_API_KEY=    # API key for OpenAI (optional, for embeddings if EMBED_MESSAGES=true)
LLM_GEMINI_API_KEY=    # API key for Google Gemini (used for summary/deriver by default)
LLM_GROQ_API_KEY=      # API key for Groq (used for query generation by default)
```
Note that the DB_CONNECTION_URI must have the prefix postgresql+psycopg to function properly. This is a requirement of SQLAlchemy.
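As a sketch of what a valid value looks like, the URI can be assembled like this (the credentials below are hypothetical placeholders):

```python
from urllib.parse import quote_plus

# Hypothetical local credentials; substitute your own.
user, password, host, port, db = "postgres", "s3cret/pw", "localhost", 5432, "honcho"

# SQLAlchemy selects the psycopg driver from the "postgresql+psycopg" prefix;
# quote_plus escapes characters such as "/" that cannot appear raw in a URI.
DB_CONNECTION_URI = f"postgresql+psycopg://{user}:{quote_plus(password)}@{host}:{port}/{db}"
print(DB_CONNECTION_URI)
```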
The template has the additional functionality disabled by default. To ensure that they are disabled you can verify the following environment variables are set to false:
```bash
AUTH_USE_AUTH=false
SENTRY_ENABLED=false
```
If you set AUTH_USE_AUTH to true you will need to generate a JWT secret. You can
do this with the following command:
```bash
python scripts/generate_jwt_secret.py
```
This will generate a JWT secret and print it to the console. Set it as the AUTH_JWT_SECRET environment variable; it is required when AUTH_USE_AUTH is true:
```bash
AUTH_JWT_SECRET=<generated_secret>
```
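For a rough idea of what such a secret looks like, here is a stdlib sketch; the repository's scripts/generate_jwt_secret.py is the canonical tool, and its exact output format is not assumed here:

```python
import secrets

# 32 random bytes rendered as 64 hex characters -- a typical shape for a
# JWT signing secret generated for local development.
jwt_secret = secrets.token_hex(32)
print(f"AUTH_JWT_SECRET={jwt_secret}")
```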
- Run database migrations
With the database set up and environment variables configured, run the migrations to create the necessary tables:
```bash
uv run alembic upgrade head
```
This will create all tables for Honcho including workspaces, peers, sessions, messages, and the queue system.
- Launch Honcho
With everything set up, you can now launch a local instance of Honcho. In addition to the database, two components need to be running:
Start the API server:
```bash
uv run fastapi dev src/main.py
```
This is a development server that will reload whenever code is changed.
Start a background worker (deriver):
In a separate terminal, run:
```bash
uv run python -m src.deriver
```
The deriver generates representations, summaries, and peer cards, and manages dreaming tasks. You can run multiple derivers to improve throughput.
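If you run everything in Docker, one way to scale the workers is Compose's `deploy.replicas`. This is a hedged sketch only; the service name and build setup here are hypothetical, so check the repository's docker-compose.yml.example for the actual layout:

```yaml
# Hypothetical compose fragment: run three deriver workers alongside the API.
services:
  deriver:
    build: .
    command: python -m src.deriver
    deploy:
      replicas: 3
```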
## Pre-commit Hooks
Honcho uses pre-commit hooks to ensure code quality and consistency across the project. These hooks automatically run checks on your code before each commit, including linting, formatting, type checking, and security scans.
### Installation
To set up pre-commit hooks in your development environment:
- Install pre-commit
