
Gateway

A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.


<p align="right"> <strong>English</strong> | <a href="./.github/README.cn.md">中文</a> | <a href="./.github/README.jp.md">日本語</a> </p>

> [!IMPORTANT]
> :rocket: **Gateway 2.0 (Pre-Release)** Portkey's core enterprise gateway is merging into open source with our 2.0 release. You can try the pre-release branch here. Read more about what's next for Portkey in our Series A announcement.

<div align="center">

🆕 Portkey Models - Open-source LLM pricing for 2,300+ models across 40+ providers. Explore →

AI Gateway

Route to 250+ LLMs with 1 fast & friendly API

<img src="https://cfassets.portkey.ai/sdk.gif" width="550px" alt="Portkey AI Gateway Demo showing LLM routing capabilities" style="margin-left:-35px">

Docs | Enterprise | Hosted Gateway | Changelog | API Reference


<a href="https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/quickcreate?stackName=portkey-gateway&templateURL=https://portkey-gateway-ec2-quicklaunch.s3.us-east-1.amazonaws.com/portkey-gateway-ec2-quicklaunch.template.yaml"><img src="https://img.shields.io/badge/Deploy_to_EC2-232F3E?style=for-the-badge&logo=amazonwebservices&logoColor=white" alt="Deploy to AWS EC2" width="105"/></a>

</div> <br/>

The AI Gateway is designed for fast, reliable & secure routing to 1600+ language, vision, audio, and image models. It is a lightweight, open-source, and enterprise-ready solution that allows you to integrate with any language model in under 2 minutes.

- [x] Blazing fast (<1ms latency) with a tiny footprint (122kb)
- [x] Battle-tested, with over 10B tokens processed every day
- [x] Enterprise-ready with enhanced security, scale, and custom deployments
<br>

What can you do with the AI Gateway?

<br><br>

> [!TIP]
> Starring this repo helps more developers discover the AI Gateway 🙏🏻


<br> <br>

Quickstart (2 mins)

1. Set up your AI Gateway

```sh
# Run the gateway locally (requires Node.js and npm)
npx @portkey-ai/gateway
```

The Gateway is running on http://localhost:8787/v1

The Gateway Console is running on http://localhost:8787/public/

<sup> Deployment guides: &nbsp; <a href="https://portkey.wiki/gh-18"><img height="12" width="12" src="https://cfassets.portkey.ai/logo/dew-color.svg" /> Portkey Cloud (Recommended)</a> &nbsp; <a href="./docs/installation-deployments.md#docker"><img height="12" width="12" src="https://cdn.simpleicons.org/docker/3776AB" /> Docker</a> &nbsp; <a href="./docs/installation-deployments.md#nodejs-server"><img height="12" width="12" src="https://cdn.simpleicons.org/node.js/3776AB" /> Node.js</a> &nbsp; <a href="./docs/installation-deployments.md#cloudflare-workers"><img height="12" width="12" src="https://cdn.simpleicons.org/cloudflare/3776AB" /> Cloudflare</a> &nbsp; <a href="./docs/installation-deployments.md#replit"><img height="12" width="12" src="https://cdn.simpleicons.org/replit/3776AB" /> Replit</a> &nbsp; <a href="./docs/installation-deployments.md"> Others...</a> </sup>
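Every request to the running gateway is plain OpenAI-compatible REST. As a sketch (not executed against a live gateway), this is roughly the request shape any HTTP client would send, assuming the `x-portkey-provider` header for provider selection:

```python
import json

# Sketch of a chat-completion request to the local gateway.
# The gateway exposes an OpenAI-compatible endpoint; the upstream provider
# is selected via a header (assumed here: "x-portkey-provider").
url = "http://localhost:8787/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "x-portkey-provider": "openai",    # which upstream provider to route to
    "Authorization": "Bearer sk-***",  # the provider's own API key
}

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from the gateway!"}],
}

# This is the JSON body a client such as curl would POST to `url`.
body = json.dumps(payload)
print(body)
```

Sending this with any HTTP client returns a standard OpenAI-style chat completion, which is why existing OpenAI SDKs work against the gateway unchanged.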

2. Make your first request

```python
# pip install -qU portkey-ai

from portkey_ai import Portkey

# OpenAI-compatible client
client = Portkey(
    provider="openai",      # or 'anthropic', 'bedrock', 'groq', etc.
    Authorization="sk-***"  # the provider API key
)

# Make a request through your AI Gateway
client.chat.completions.create(
    messages=[{"role": "user", "content": "What's the weather like?"}],
    model="gpt-4o-mini"
)
```

<sup>Supported Libraries:   <img height="12" width="12" src="https://cdn.simpleicons.org/javascript/3776AB" /> JS   <img height="12" width="12" src="https://cdn.simpleicons.org/python/3776AB" /> Python   <img height="12" width="12" src="https://cdn.simpleicons.org/gnubash/3776AB" /> REST   <img height="12" width="12" src="https://cdn.simpleicons.org/openai/3776AB" /> OpenAI SDKs   <img height="12" width="12" src="https://cdn.simpleicons.org/langchain/3776AB" /> Langchain   LlamaIndex   Autogen   CrewAI   More.. </sup>

On the Gateway Console (http://localhost:8787/public/) you can see all of your local logs in one place.

<img src="https://github.com/user-attachments/assets/362bc916-0fc9-43f1-a39e-4bd71aac4a3a" width="400" />

3. Routing & Guardrails

Configs in the LLM gateway let you create routing rules, add reliability features, and set up guardrails.

```python
config = {
    "retry": {"attempts": 5},

    "output_guardrails": [{
        "default.contains": {"operator": "none", "words": ["Apple"]},
        "deny": True
    }]
}

# Attach the config to the client
client = client.with_options(config=config)

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply randomly with Apple or Bat"}]
)

# This always responds with "Bat", because the guardrail denies any reply
# containing "Apple". The retry config retries up to 5 times before giving up.
```
<div align="center"> <img src="https://portkey.ai/blog/content/images/size/w1600/2024/11/image-15.png" width=600 title="Request flow through Portkey's AI gateway with retries and guardrails" alt="Request flow through Portkey's AI gateway with retries and guardrails"/> </div>

Configs can do much more in your AI gateway. Jump to examples →
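For instance, a config can declare a fallback strategy across providers. A minimal sketch following the same config shape as above (the target providers and models here are illustrative):

```python
# Sketch of a fallback config: try OpenAI first, fall back to Anthropic.
# The field names follow Portkey's config schema; the targets are examples.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o-mini"}},
        {"provider": "anthropic", "override_params": {"model": "claude-3-haiku-20240307"}},
    ],
    "retry": {"attempts": 3},  # retry a target before moving to the next one
}

# Attached the same way as any other config:
# client = client.with_options(config=fallback_config)
```

The gateway walks the `targets` list in order, so the second entry only receives traffic when the first fails.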

<br/>

Enterprise Version (Private deployments)

<sup>

<img height="12" width="12" src="https://cfassets.portkey.ai/amazon-logo.svg" /> AWS   <img height="12" width="12" src="https://cfassets.portkey.ai/azure-logo.svg" /> Azure   <img height="12" width="12" src="https://cdn.simpleicons.org/googlecloud/3776AB" /> GCP   <img height="12" width="12" src="https://cdn.simpleicons.org/redhatopenshift/3776AB" /> OpenShift   <img height="12" width="12" src="https://cdn.simpleicons.org/kubernetes/3776AB" /> Kubernetes

</sup>

The LLM Gateway's enterprise version offers advanced capabilities for org management, governance, security and more out of the box. View Feature Comparison →

The enterprise deployment architecture for supported platforms is available here - Enterprise Private Cloud Deployments

<a href="https://portkey.sh/demo-13"><img src="https://portkey.ai/blog/content/images/2024/08/Get-API-Key--5-.png" height=50 alt="Book an enterprise AI gateway demo" /></a><br/>

<br>

MCP Gateway

MCP Gateway provides a centralized control plane for managing MCP (Model Context Protocol) servers across your organization.

  • Authentication — Single auth layer at the gateway. Users authenticate once; your MCP servers receive verified requests
  • Access Control — Control which teams and users can access which servers and tools. Revoke access instantly
  • Observability — Every tool call logged with full context: who called what, parameters, response, latency
  • Identity Forwarding — Forward user identity (email, team, roles) to MCP servers automatically

Works with Claude Desktop, Cursor, VS Code, and any MCP-compatible client. Get started →

<br>

Core Features

Reliable Routing

  • <a href="https://portkey.wiki/gh-37">Fallbacks</a>: Fall back to another provider or model when a request fails, using the LLM gateway. You can specify the errors that trigger the fallback, improving the reliability of your application.
  • <a href="https://portkey.wiki/gh-38">Automatic Retries</a>: Automatically retry failed requests up to 5 times. An exponential backoff strategy spaces out retry attempts to prevent network overload.