Laminar
Laminar is an open-source observability platform purpose-built for AI agents. YC S24.
- [x] Tracing. Docs
  - [x] Powerful OpenTelemetry-native tracing SDK: one line of code to automatically trace Vercel AI SDK, Browser Use, Stagehand, LangChain, OpenAI, Anthropic, Gemini, and more.
- [x] Evals. Docs
  - [x] Unopinionated, extensible SDK and CLI for running evals locally or in CI/CD pipelines.
  - [x] UI for visualizing evals and comparing results.
- [x] AI monitoring. Docs
  - [x] Define events with natural-language descriptions to track issues, logical errors, and custom behavior of your agent.
- [x] SQL access to all data. Docs
  - [x] Query traces, metrics, and events with a built-in SQL editor. Bulk-create datasets from queries. Available via API.
- [x] Dashboards. Docs
  - [x] Powerful dashboard builder for traces, metrics, and events, with support for custom SQL queries.
- [x] Data annotation & datasets. Docs
  - [x] Custom data-rendering UI for fast data annotation and dataset creation for evals.
- [x] Extremely high performance.
  - [x] Written in Rust 🦀
  - [x] Custom realtime engine for viewing traces as they happen.
  - [x] Ultra-fast full-text search over span data.
  - [x] gRPC exporter for tracing data.

Documentation
Check out the full documentation at docs.laminar.sh.
Getting started
The fastest and easiest way to get started is with our managed platform: laminar.sh.
Self-hosting with Docker compose
Laminar is easy to self-host locally. For a quick start, clone the repo and start the services with Docker Compose:

```sh
git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d
```

This spins up a lightweight but full-featured version of the stack, good for a quickstart or light usage. You can access the UI at http://localhost:5667 in your browser.
You will also need to configure the SDK with the correct baseUrl and ports. See the guide on self-hosting.
For production environments, we recommend using our managed platform or running `docker compose -f docker-compose-full.yml up -d`.
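When self-hosting, the SDK must be pointed at your instance instead of the Laminar cloud. A minimal sketch in Python; the `base_url`, `http_port`, and `grpc_port` parameter names are taken from the self-hosting guide, so verify them against your SDK version:

```python
from lmnr import Laminar

# Point the SDK at a locally self-hosted Laminar instance.
# The port values assume the default docker compose setup; adjust them
# if you changed the port mappings.
Laminar.initialize(
    project_api_key="<LMNR_PROJECT_API_KEY>",
    base_url="http://localhost",
    http_port=8000,
    grpc_port=8001,
)
```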
Enabling the Signals feature
To enable Signals / AI monitoring in self-hosted mode, set the GOOGLE_GENERATIVE_AI_API_KEY environment variable in your .env file. This key is required by both the app-server and the frontend.
```sh
# In .env at the repo root
GOOGLE_GENERATIVE_AI_API_KEY=your_key_here
```
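Since both services need this key, it can help to fail fast at startup if it is missing. A small sketch in Python; the function name is illustrative, not part of Laminar:

```python
import os


def require_signals_key(env=os.environ):
    """Return the Gemini API key Signals needs, or raise a clear error."""
    key = env.get("GOOGLE_GENERATIVE_AI_API_KEY")
    if not key:
        raise RuntimeError(
            "GOOGLE_GENERATIVE_AI_API_KEY is not set; "
            "add it to the .env file at the repo root"
        )
    return key
```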
Contributing
For running and building Laminar locally, or to learn more about docker compose files, follow the guide in Contributing.
TS quickstart
First, create a project and generate a project API key. Then:

```sh
npm add @lmnr-ai/lmnr
```

This installs the Laminar TS SDK and all instrumentation packages (OpenAI, Anthropic, LangChain, ...).
To start tracing LLM calls, just add:

```typescript
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });
```
To trace inputs and outputs of functions, use the observe wrapper:

```typescript
import { OpenAI } from 'openai';
import { observe } from '@lmnr-ai/lmnr';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const poemWriter = observe({ name: 'poemWriter' }, async (topic: string) => {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: `write a poem about ${topic}` }],
  });
  return response.choices[0].message.content;
});

await poemWriter('laminar flow');
```
Python quickstart
First, create a project and generate a project API key. Then:

```sh
pip install --upgrade 'lmnr[all]'
```

This installs the Laminar Python SDK and all instrumentation packages. See the docs for the list of all available instruments.
To start tracing LLM calls, just add:

```python
from lmnr import Laminar

Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")
```
To trace inputs and outputs of functions, use the @observe() decorator:

```python
import os

from openai import OpenAI
from lmnr import Laminar, observe

Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


@observe()  # annotate all functions you want to trace
def poem_writer(topic):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"write a poem about {topic}"},
        ],
    )
    poem = response.choices[0].message.content
    return poem


if __name__ == "__main__":
    print(poem_writer(topic="laminar flow"))
```
Client libraries
To learn more about instrumenting your code, check out our client libraries:
- TypeScript/JavaScript: https://www.npmjs.com/package/@lmnr-ai/lmnr
- Python: https://pypi.org/project/lmnr/