OpenLIT
Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Management, Vault, Playground. 🚀💻 Integrates with 50+ LLM Providers, VectorDBs, Agent Frameworks and GPUs.
Observability, Evaluations, Guardrails, Prompts, Vault, Playground, FleetHub
Open Source Platform for AI Engineering
Documentation | Quickstart | Python SDK | TypeScript SDK
❤️ Sponsor this project ❤️
https://github.com/user-attachments/assets/6909bf4a-f5b4-4060-bde3-95e91fa36168
OpenLIT allows you to simplify your AI development workflow, especially for Generative AI and LLMs. It streamlines essential tasks like experimenting with LLMs, organizing and versioning prompts, and securely handling API keys. With just one line of code, you can enable OpenTelemetry-native observability, offering full-stack monitoring that includes LLMs, vector databases, and GPUs. This enables developers to confidently build AI features and applications, transitioning smoothly from testing to production.
This project proudly follows and maintains the Semantic Conventions with the OpenTelemetry community, consistently updating to align with the latest standards in Observability.
⚡ Features

- 📈 Analytics Dashboard: Monitor your AI application's health and performance with detailed dashboards that track metrics, costs, and user interactions, providing a clear view of overall efficiency.
- 🔌 OpenTelemetry-native Observability SDKs: Vendor-neutral SDKs to send traces and metrics to your existing observability tools.
- 💲 Cost Tracking for Custom and Fine-Tuned Models: Tailor cost estimations for specific models using custom pricing files for precise budgeting (see the sketch after this list).
- 🐛 Exceptions Monitoring Dashboard: Quickly spot and resolve issues by tracking common exceptions and errors with a dedicated monitoring dashboard.
- 💭 Prompt Management: Manage and version prompts using Prompt Hub for consistent and easy access across applications.
- 🔑 API Keys and Secrets Management: Securely handle your API keys and secrets centrally, avoiding insecure practices.
- 🎮 Experiment with different LLMs: Use OpenGround to explore, test, and compare various LLMs side by side.
- 🚀 Fleet Hub for OpAMP Management: Centrally manage and monitor OpenTelemetry Collectors across your infrastructure using OpAMP (Open Agent Management Protocol) with secure TLS communication.
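For the cost-tracking item above, here is a minimal sketch of pointing the SDK at a custom pricing file. The `pricing_json` parameter name and the URL are assumptions for illustration; check the cost-tracking documentation for the exact interface and file schema.

```python
import openlit

# Sketch only: the parameter name `pricing_json` and the URL below are
# assumptions for illustration. Verify the exact parameter and pricing-file
# schema in the cost-tracking docs before relying on this.
openlit.init(
    pricing_json="https://example.com/custom-pricing.json",
)
```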
🚀 Getting Started with LLM Observability
```mermaid
flowchart TB;
  subgraph " "
    direction LR;
    subgraph " "
      direction LR;
      OpenLIT_SDK[OpenLIT SDK] -->|Sends Traces & Metrics| OTC[OpenTelemetry Collector];
      OTC -->|Stores Data| ClickHouseDB[ClickHouse];
    end
    subgraph " "
      direction RL;
      OpenLIT_UI[OpenLIT] -->|Pulls Data| ClickHouseDB;
    end
  end
```
Step 1: Deploy OpenLIT Stack
- Git Clone OpenLIT Repository

  Open your command line or terminal and run:

  ```sh
  git clone git@github.com:openlit/openlit.git
  ```

- Self-host using Docker

  Deploy and run OpenLIT with the following command:

  ```sh
  docker compose up -d
  ```
For instructions on installing in Kubernetes using Helm, refer to the Kubernetes Helm installation guide.
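Once the containers are up, you can optionally confirm that the stack is reachable before wiring up the SDK. A minimal sketch, assuming the default ports used in this guide (3000 for the OpenLIT UI, 4318 for the OTLP HTTP endpoint):

```python
# Quick local check that the OpenLIT UI and the OTLP HTTP endpoint are
# accepting connections after `docker compose up -d`. Ports are the defaults
# used later in this guide; adjust if you changed the compose configuration.
import socket

for name, port in [("OpenLIT UI", 3000), ("OTLP HTTP endpoint", 4318)]:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)
        reachable = sock.connect_ex(("127.0.0.1", port)) == 0
        print(f"{name} on port {port}: {'reachable' if reachable else 'not reachable'}")
```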
Step 2: Install OpenLIT SDK
Open your command line or terminal and run:
```sh
pip install openlit
```
For instructions on using the TypeScript SDK, visit the TypeScript SDK Installation guide.
Step 3: Initialize OpenLIT in your Application
Integrate OpenLIT into your AI applications by adding the following lines to your code.
```python
import openlit

openlit.init()
```
Configure the telemetry data destination as follows:
| Purpose | Parameter/Environment Variable | For Sending to OpenLIT |
| ---------------------------------- | ------------------------------------------------ | ------------------------- |
| Send data to an HTTP OTLP endpoint | otlp_endpoint or OTEL_EXPORTER_OTLP_ENDPOINT | "http://127.0.0.1:4318" |
| Authenticate telemetry backends | otlp_headers or OTEL_EXPORTER_OTLP_HEADERS | Not required by default |
💡 Info: If `otlp_endpoint` or `OTEL_EXPORTER_OTLP_ENDPOINT` is not provided, the OpenLIT SDK outputs traces directly to your console, which is recommended during development.
Example
<details> <summary>Initialize using Function Arguments</summary>
Add the following two lines to your application code:
```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
)
```
</details>
<details>
<summary>Initialize using Environment Variables</summary>
Add the following two lines to your application code:
```python
import openlit

openlit.init()
```
Then, configure your OTLP endpoint using an environment variable:

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```
</details>
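If your telemetry backend requires authentication, pass the credentials through the `otlp_headers` parameter (or the `OTEL_EXPORTER_OTLP_HEADERS` environment variable) from the table above. A minimal sketch; the header name and token below are placeholders for whatever your backend expects:

```python
import openlit

# Sketch only: the Authorization header and token are placeholders. Supply
# whatever header(s) your observability backend requires; the string form
# mirrors the OTEL_EXPORTER_OTLP_HEADERS "key=value" convention.
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    otlp_headers="Authorization=Bearer <your-token>",
)
```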
Step 4: Visualize and Optimize
With observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to gain insight into your AI application's performance and behavior and to identify areas for improvement.
Just head over to OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:

- Email: `user@openlit.io`
- Password: `openlituser`

📦 Supported Integrations
OpenLIT auto-instruments 44+ LLM providers, AI frameworks, and vector databases with a single line of code. Each integration produces OpenTelemetry-native traces and metrics. Click any card to view the integration docs.
<sub> <img src="https://img.shields.io/badge/Python-3776AB?logo=python&logoColor=white&style=flat-square" height="16"> Python SDK <img src="https://img.shields.io/badge/TypeScript-3178C6?logo=typescript&logoColor=white&style=flat-square" height="16"> TypeScript SDK </sub>
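As an illustration of what auto-instrumentation looks like in practice, here is a minimal sketch; the `openai` client call is only an example, and any supported provider is traced the same way once `openlit.init()` has run:

```python
import openlit
from openai import OpenAI  # illustrative provider; any supported integration is traced the same way

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello"}],
)
# The call above is captured automatically: the resulting trace (with token
# usage, cost, and latency) is exported to the OTLP endpoint set in init().
print(response.choices[0].message.content)
```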