# OpenLLMetry
Open-source observability for your GenAI or LLM application, based on OpenTelemetry
🎉 New: Our semantic conventions are now part of OpenTelemetry! Join the discussion and help us shape the future of LLM observability.
Looking for the JS/TS version? Check out OpenLLMetry-JS.
OpenLLMetry is a set of extensions built on top of OpenTelemetry that gives you complete observability over your LLM application. Because it uses OpenTelemetry under the hood, it can be connected to your existing observability solutions - Datadog, Honeycomb, and others.
It's built and maintained by Traceloop under the Apache 2.0 license.
The repo contains standard OpenTelemetry instrumentations for LLM providers and Vector DBs, as well as a Traceloop SDK that makes it easy to get started with OpenLLMetry, while still outputting standard OpenTelemetry data that can be connected to your observability stack. If you already have OpenTelemetry instrumented, you can just add any of our instrumentations directly.
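For example, if you already have an OpenTelemetry `TracerProvider` set up, you can plug in a single instrumentation on its own. A minimal sketch, assuming the standalone `opentelemetry-instrumentation-openai` package and a console exporter purely for illustration:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Your existing OpenTelemetry setup (console exporter used here for illustration)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument OpenAI calls; spans flow through the provider configured above
OpenAIInstrumentor().instrument()
```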
## 🚀 Getting Started
The easiest way to get started is to use our SDK. For a complete guide, go to our docs.
Install the SDK:
```bash
pip install traceloop-sdk
```
Then, to start instrumenting your code, just add these lines:
```python
from traceloop.sdk import Traceloop

Traceloop.init()
```
That's it. You're now tracing your code with OpenLLMetry! If you're running this locally, you may want to disable batch sending, so you can see the traces immediately:
```python
Traceloop.init(disable_batch=True)
```
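For a quick local check, a sketch like the following is enough to see LLM spans emitted (assuming the `openai` client library is installed and `OPENAI_API_KEY` is set; the model name is just an example):
```python
from openai import OpenAI
from traceloop.sdk import Traceloop

# Disable batching so spans show up immediately while developing locally
Traceloop.init(disable_batch=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```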
## ⏫ Supported (and tested) destinations
- ✅ Traceloop
- ✅ Axiom
- ✅ Azure Application Insights
- ✅ Braintrust
- ✅ Dash0
- ✅ Datadog
- ✅ Dynatrace
- ✅ Google Cloud
- ✅ Grafana
- ✅ Highlight
- ✅ Honeycomb
- ✅ HyperDX
- ✅ IBM Instana
- ✅ KloudMate
- ✅ Laminar
- ✅ New Relic
- ✅ OpenTelemetry Collector
- ✅ Oracle Cloud
- ✅ Scorecard
- ✅ ServiceNow Cloud Observability
- ✅ SigNoz
- ✅ Sentry
- ✅ Splunk
- ✅ Tencent Cloud
See our docs for instructions on connecting to each one.
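As a rough sketch of what pointing the SDK at another backend looks like, the documented `TRACELOOP_BASE_URL` and `TRACELOOP_HEADERS` settings can be set in your environment (the endpoint and header below are example values for an OTLP-compatible backend; check the docs for your destination's exact values):
```python
import os
from traceloop.sdk import Traceloop

# Normally you would export these in your shell or deployment config;
# the values below are examples for an OTLP-compatible backend.
os.environ["TRACELOOP_BASE_URL"] = "https://api.honeycomb.io"
os.environ["TRACELOOP_HEADERS"] = "x-honeycomb-team=<YOUR_API_KEY>"

Traceloop.init(app_name="my-llm-app")
```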
## 🪗 What do we instrument?
OpenLLMetry can instrument everything that OpenTelemetry already instruments - so things like your DB, API calls, and more. On top of that, we built a set of custom extensions that instrument things like your calls to OpenAI or Anthropic, or your Vector DB like Chroma, Pinecone, Qdrant or Weaviate (a combined example follows the lists below).
### LLM Providers
- ✅ Aleph Alpha
- ✅ Anthropic
- ✅ Bedrock (AWS)
- ✅ Cohere
- ✅ Google Generative AI (Gemini)
- ✅ Groq
- ✅ HuggingFace
- ✅ IBM Watsonx AI
- ✅ Mistral AI
- ✅ Ollama
- ✅ OpenAI / Azure OpenAI
- ✅ Replicate
- ✅ SageMaker (AWS)
- ✅ Together AI
- ✅ Vertex AI (GCP)
- ✅ WRITER
### Vector DBs
- ✅ Chroma
- ✅ Pinecone
- ✅ Qdrant
- ✅ Weaviate
### Frameworks
- ✅ Agno
- ✅ AWS Strands (built-in OTEL support)
- ✅ CrewAI
- ✅ Haystack
- ✅ LangChain
- ✅ Langflow
- ✅ LangGraph
- ✅ LiteLLM
- ✅ LlamaIndex
- ✅ OpenAI Agents
### Protocol
- ✅ MCP
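To make the lists above concrete, here is a sketch of a call path where both the vector DB query and the LLM call are traced automatically (assuming the `chromadb` and `anthropic` client libraries are installed, `ANTHROPIC_API_KEY` is set, and the model name is just an example):
```python
import anthropic
import chromadb
from traceloop.sdk import Traceloop

Traceloop.init(disable_batch=True)

# Vector DB query - the Chroma instrumentation records it as a span
chroma = chromadb.Client()
collection = chroma.get_or_create_collection("docs")
collection.add(ids=["1"], documents=["OpenLLMetry instruments LLM and vector DB calls."])
results = collection.query(query_texts=["What does OpenLLMetry do?"], n_results=1)

# LLM call - the Anthropic instrumentation records the request and response
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-haiku-latest",  # example model name
    max_tokens=100,
    messages=[{
        "role": "user",
        "content": f"Answer using this context: {results['documents'][0][0]}",
    }],
)
print(message.content[0].text)
```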
## 🔎 Telemetry
We no longer log or collect any telemetry in the SDK or in the instrumentations. Make sure to bump to v0.49.2 or above.
### Why we collected telemetry
- The primary purpose was to detect exceptions within instrumentations. Since LLM providers frequently update their APIs, this helped us quickly identify and fix any breaking changes.
- We only collected anonymous data, with no personally identifiable information. You can view exactly what data we collected in our Privacy documentation.
- Telemetry was only ever collected in the SDK. If you used the instrumentations directly without the SDK, no telemetry was collected.
## 🌱 Contributing
Whether big or small, we love contributions ❤️ Check out our guide to see how to get started.
Not sure where to get started? You can:
- Book a free pairing session with one of our teammates!
- Join our [Slack](https://traceloop.com/slack), and ask us any questions there.
## 💚 Community & Support
- Slack (For live discussion with the community and the Traceloop team)
- [GitHub Discussions](https://github.com/traceloop/openllmetry/discussions)