# Helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
| 🔍 Observability | 🕸️ Agent Tracing | 🚂 LLM Routing |
| :--------------: | :--------------: | :------------: |
| 💰 Cost & Latency Tracking | 📚 Datasets & Fine-tuning | 🎛️ Automatic Fallbacks |
</div>

<p align="center" style="margin: 0; padding: 0;">
  <img alt="helicone logo" src="https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/Twitter_Cover_A1.png" style="display: block; margin: 0; padding: 0;">
</p>
<br/>
<p align="center">
  <a href='https://github.com/helicone/helicone/graphs/contributors'><img src='https://img.shields.io/github/contributors/helicone/helicone?style=flat-square' alt='Contributors' /></a>
  <a href='https://github.com/helicone/helicone/stargazers'><img alt="GitHub stars" src="https://img.shields.io/github/stars/helicone/helicone?style=flat-square"/></a>
  <a href='https://github.com/helicone/helicone/pulse'><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/helicone/helicone?style=flat-square"/></a>
  <a href='https://github.com/helicone/helicone/issues?q=is%3Aissue+is%3Aclosed'><img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/helicone/helicone?style=flat-square"/></a>
  <a href='https://www.ycombinator.com/companies/helicone'><img alt="Y Combinator" src="https://img.shields.io/badge/Y%20Combinator-Helicone-orange?style=flat-square"/></a>
</p>
<p align="center">
  <a href="https://docs.helicone.ai/">Docs</a> •
  <a href="https://www.helicone.ai/changelog">Changelog</a> •
  <a href="https://github.com/helicone/helicone/issues">Bug reports</a> •
  <a href="https://helicone.ai/demo">See Helicone in Action! (Free)</a>
</p>

## Helicone is an AI Gateway & LLM Observability Platform for AI Engineers
- 🌐 AI Gateway: Access 100+ AI models with 1 API key through the OpenAI API with intelligent routing and automatic fallbacks. Get started in 2 minutes.
- 🔌 Quick integration: One line of code to log all your requests from OpenAI, Anthropic, LangChain, Gemini, Vercel AI SDK, and more.
- 📊 Observe: Inspect and debug traces & sessions for agents, chatbots, document processing pipelines, and more.
- 📈 Analyze: Track metrics like cost, latency, quality, and more. Export to PostHog in one line for custom dashboards.
- 🎮 Playground: Rapidly test and iterate on prompts, sessions and traces in our UI.
- 🧠 Prompt Management: Version prompts using production data. Deploy prompts through the AI Gateway without code changes. Your prompts remain under your control, always accessible.
- 🎛️ Fine-tune: Fine-tune with one of our fine-tuning partners: OpenPipe or Autonomi (more coming soon)
- 🛡️ Enterprise Ready: SOC 2 and GDPR compliant
<img src="https://github.com/user-attachments/assets/e16332e9-d642-427e-b3ce-1a74a17f7b2c" alt="Open Sourced LLM Observability & AI Gateway Platform" width="600">

🎁 Generous monthly free tier (10k requests/month) - No credit card required!
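The gateway's automatic fallbacks can be pictured as trying a ranked list of models until one succeeds. A toy sketch of that idea follows; the `withFallbacks` helper, the stubbed call, and the outage scenario are illustrative only, not Helicone's actual server-side implementation:

```typescript
// Toy illustration of fallback routing: try each model in order and return
// the first successful result. The real gateway does this server-side.
type ModelCall = (model: string) => Promise<string>;

async function withFallbacks(models: string[], call: ModelCall): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await call(model); // first model that answers wins
    } catch (err) {
      lastError = err; // remember the failure, fall through to the next model
    }
  }
  throw lastError; // every model failed
}

// Demo with a stubbed call where the primary model is "down":
withFallbacks(
  ["gpt-4o-mini", "claude-sonnet-4", "gemini-2.0-flash"],
  async (model) => {
    if (model === "gpt-4o-mini") throw new Error("provider outage");
    return `answered by ${model}`;
  }
).then((result) => console.log(result)); // → "answered by claude-sonnet-4"
```

With the gateway, this retry ladder lives behind one endpoint, so client code never changes when a provider degrades.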
## Quick Start ⚡️

1. Get your API key by signing up at [helicone.ai](https://www.helicone.ai) and add credits at [helicone.ai/credits](https://www.helicone.ai/credits).

2. Update the `baseURL` in your code and add your API key:

   ```typescript
   import OpenAI from "openai";

   const client = new OpenAI({
     baseURL: "https://ai-gateway.helicone.ai",
     apiKey: process.env.HELICONE_API_KEY,
   });

   const response = await client.chat.completions.create({
     model: "gpt-4o-mini", // claude-sonnet-4, gemini-2.0-flash, or any model from https://www.helicone.ai/models
     messages: [{ role: "user", content: "Hello!" }],
   });
   ```

3. 🎉 You're all set! View your logs at [Helicone](https://www.helicone.ai) and access 100+ models through one API.
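Because the gateway speaks the OpenAI wire format, any HTTP client can target it without an SDK. A minimal sketch of building such a request, assuming the same base URL and bearer-style auth that the SDK snippet in the Quick Start implies (the `buildChatRequest` helper name is ours):

```typescript
// Build a plain-fetch request for the gateway's OpenAI-compatible
// chat-completions endpoint. Pure function, so it is easy to inspect and test.
function buildChatRequest(model: string, content: string) {
  return {
    url: "https://ai-gateway.helicone.ai/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.HELICONE_API_KEY ?? ""}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content }],
      }),
    },
  };
}

// Usage (requires a valid HELICONE_API_KEY and network access):
// const { url, init } = buildChatRequest("gpt-4o-mini", "Hello!");
// const res = await fetch(url, init);
```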
## Self-Hosting Open Source LLM Observability

### Docker

Helicone is simple to self-host and update. To get started locally, just use our docker-compose file.
```bash
# Clone the repository
git clone https://github.com/Helicone/helicone.git
cd helicone/docker
cp .env.example .env

# Start the services
./helicone-compose.sh helicone up
```
### Helm

For enterprise workloads, we also have a production-ready Helm chart available. To access it, contact us at enterprise@helicone.ai.

### Manual (Not Recommended)

Manual deployment is not recommended. Please use Docker or Helm. If you must, follow the instructions here.
## Architecture

Helicone is composed of six services:

- Web: Frontend platform (Next.js)
- Worker: Proxy logging (Cloudflare Workers)
- Jawn: Dedicated server for collecting and serving logs (Express + tsoa)
- Supabase: Application database and auth
- ClickHouse: Analytics database
- Minio: Object storage for logs
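A rough mental model of how a logged request flows through those services: the Worker proxies the call and emits a log event, Jawn ingests it, and the analytics store aggregates it. The sketch below is purely for intuition; every type, function name, and field is hypothetical, not Helicone's actual interfaces:

```typescript
// Hypothetical data flow: Worker → Jawn → analytics store (ClickHouse's role).
interface LogEvent {
  model: string;
  latencyMs: number;
  costUsd: number;
}

const analyticsStore: LogEvent[] = []; // stand-in for ClickHouse

function jawnIngest(event: LogEvent): void {
  analyticsStore.push(event); // stand-in for Jawn persisting a processed log
}

function workerProxy(model: string, latencyMs: number, costUsd: number): void {
  // The real Worker forwards the request to the provider, then logs asynchronously.
  jawnIngest({ model, latencyMs, costUsd });
}

function totalCost(): number {
  return analyticsStore.reduce((sum, e) => sum + e.costUsd, 0);
}

workerProxy("gpt-4o-mini", 420, 0.0002);
workerProxy("claude-sonnet-4", 380, 0.003);
console.log(totalCost().toFixed(4)); // → "0.0032"
```

The key design point this illustrates is that logging sits beside the proxy path, so dashboard metrics like cost and latency come from the same events the Worker already sees.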
## Integrations 🔌

### Inference Providers

| Integration | Supports | Description |
| ----------- | -------- | ----------- |
| AI Gateway | JS/TS, Python, cURL | Unified API for 100+ providers with intelligent routing, automatic fallbacks, and unified observability |
| Async Logging (OpenLLMetry) | JS/TS, Python | Asynchronous logging for multiple LLM platforms |
| OpenAI | JS/TS, Python | Inference provider |
| Azure OpenAI | JS/TS, Python | Inference provider |
| Anthropic | JS/TS, Python | Inference provider |
| Ollama | JS/TS | Run and use large language models locally |
| AWS Bedrock | JS/TS | Inference provider |
| Gemini API | JS/TS | Inference provider |
| Gemini Vertex AI | JS/TS | Gemini models on Google Cloud's Vertex AI |
| Vercel AI | JS/TS | AI SDK for building AI-powered applications |
| Anyscale | JS/TS, Python | Inference provider |
| TogetherAI | JS/TS, Python | Inference provider |
| Hyperbolic | JS/TS, Python | High-performance AI inference platform |
| Groq | [JS/TS, Python](https://www.helicone.ai/models?providers=gro | |