# AI Pipe

AI Pipe lets you build web apps that access LLM APIs (e.g. OpenRouter, OpenAI, Gemini) without a back-end.

An instance is hosted at https://aipipe.org/. You can host your own on Cloudflare. Licensed under MIT.
## User Guide
Visit these pages:

- [aipipe.org](https://aipipe.org/) to understand how it works.
- [aipipe.org/login](https://aipipe.org/login) with a Google Account to get your AI Pipe Token and track your usage.
- [aipipe.org/playground](https://aipipe.org/playground) to explore models and chat with them.
### AI Pipe Token
You can use the AI Pipe Token from [aipipe.org/login](https://aipipe.org/login) in any OpenAI API compatible application by setting:

- `OPENAI_API_KEY` as your AI Pipe Token
- `OPENAI_BASE_URL` as `https://aipipe.org/openai/v1`
For example:
```sh
export OPENAI_API_KEY=$AIPIPE_TOKEN
export OPENAI_BASE_URL=https://aipipe.org/openai/v1
```
Now you can run:
```sh
uvx openai api chat.completions.create -m gpt-5-nano -g user "Hello"
```
... or:
```sh
uvx llm 'Hello' -m gpt-5-nano --key $AIPIPE_TOKEN
```
This will print something like `Hello! How can I assist you today?`
### Native Provider API Keys
You can also use your own provider API keys directly (instead of an AI Pipe Token). This is useful if you want to:
- Bypass AI Pipe's cost tracking and budget limits
- Use models that aren't yet in AI Pipe's pricing database
- Handle billing directly with the provider
Simply use your native API key in the Authorization header:
```sh
# OpenAI with native key (starts with sk-)
curl https://aipipe.org/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5-nano", "messages": [{"role": "user", "content": "Hello"}]}'

# OpenRouter with native key (starts with sk-or-)
curl https://aipipe.org/openrouter/v1/chat/completions \
  -H "Authorization: Bearer sk-or-your-openrouter-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-5-nano", "messages": [{"role": "user", "content": "Hello"}]}'

# Gemini with native key (starts with AIza)
curl https://aipipe.org/geminiv1beta/models/gemini-2.5-flash-lite:generateContent \
  -H "Authorization: Bearer AIzaSyYourGeminiKey" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}'
```
Native keys are detected by their prefix pattern:

- OpenAI / OpenRouter: keys starting with `sk-`
- Google Gemini: keys starting with `AIza`
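The prefix rules above can be sketched as a small routing check. This is illustrative only: the function name and the fallback behavior (treating anything else as an AI Pipe Token) are assumptions, not AI Pipe's actual implementation.

```javascript
// Classify a key by its prefix, mirroring the detection rules above.
// Hypothetical helper; AI Pipe's real routing logic may differ.
function detectKeyType(key) {
  if (key.startsWith("sk-or-")) return "openrouter"; // check before the broader sk- prefix
  if (key.startsWith("sk-")) return "openai";
  if (key.startsWith("AIza")) return "gemini";
  return "aipipe"; // assume anything else is an AI Pipe Token
}
```

Note that `sk-or-` must be tested before `sk-`, since every OpenRouter key also matches the OpenAI prefix.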
**Note**: Native API keys cannot access the `/usage` or `/admin` endpoints, which require an AI Pipe Token.
## Developer Guide
Paste this code into `index.html`, open it in a browser, and check your DevTools Console:
```html
<script type="module">
  import { getProfile } from "https://aipipe.org/aipipe.js";

  const { token, email } = getProfile();
  if (!token) window.location = `https://aipipe.org/login?redirect=${window.location.href}`;

  const response = await fetch("https://aipipe.org/openrouter/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "openai/gpt-5-nano",
      messages: [{ role: "user", content: "What is 2 + 2?" }],
    }),
  }).then((r) => r.json());
  console.log(response);
</script>
```
This app will:

1. Redirect the user to AI Pipe.
   - `getProfile()` sets `token` to `null` since it doesn't know the user. `window.location` redirects the user to `https://aipipe.org/login` with `?redirect=` as your app URL.
2. Redirect them back to your app once they log in.
   - Your app URL will have `?aipipe_token=...&aipipe_email=...` with the user's token and email. `getProfile()` fetches these, stores them for future reference, and returns `token` and `email`.
3. Make an LLM API call to OpenRouter or OpenAI and log the response.
   - You can replace any call to `https://openrouter.ai/api/v1` with `https://aipipe.org/openrouter/v1` and provide `Authorization: Bearer $AIPIPE_TOKEN` as a header.
   - Similarly, you can replace `https://api.openai.com/v1` with `https://aipipe.org/openai/v1` and provide `Authorization: Bearer $AIPIPE_TOKEN` as a header.
   - AI Pipe replaces the token and proxies the request to the provider.
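The redirect step above can be sketched as a plain function: after login, the token and email arrive as query parameters. This is a simplified illustration of what `getProfile()` does, not its actual source; the function name `parseProfile` and the omission of storage are assumptions.

```javascript
// Minimal sketch: extract aipipe_token / aipipe_email from the URL
// the user lands on after the login redirect. The real getProfile()
// also persists these for future visits; that part is omitted here.
function parseProfile(url) {
  const params = new URL(url).searchParams;
  const token = params.get("aipipe_token");
  const email = params.get("aipipe_email");
  return token ? { token, email } : { token: null, email: null };
}
```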
## API
`GET /usage`: Returns usage data for the specified email and time period.
Example: Get usage for a user
```sh
curl https://aipipe.org/usage -H "Authorization: Bearer $AIPIPE_TOKEN"
```
Response:
```json
{
  "email": "user@example.com",
  "days": 7,
  "cost": 0.000137,
  "usage": [{ "date": "2025-04-16", "cost": 0.000137 }],
  "limit": 0.1
}
```
`GET /proxy/[URL]`: Proxies requests to the specified URL, bypassing CORS restrictions. No authentication required.
Example: Get contents of a URL
```sh
curl "https://aipipe.org/proxy/https://httpbin.org/get?x=1"
```
Response:
```json
{
  "args": { "x": "1" },
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/8.5.0"
  },
  "origin": "45.123.26.54",
  "url": "https://httpbin.org/get?x=1"
}
```
Notes:

- The response includes the original URL in the `X-Proxy-URL` header
- URLs must begin with `http` or `https`
- Requests time out after 30 seconds
- All HTTP methods (GET, POST, etc.) and headers are preserved
- CORS headers are added for browser compatibility
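The proxy URL is formed by appending the target URL to the endpoint. A tiny helper can build it and enforce the `http`/`https` rule noted above client-side; the function name is illustrative, not part of any AI Pipe library.

```javascript
// Build a /proxy/ URL for a target, rejecting non-http(s) schemes up front
// (the server enforces the same rule). Hypothetical helper.
function proxied(url) {
  if (!/^https?:/.test(url)) throw new Error("URL must begin with http or https");
  return "https://aipipe.org/proxy/" + url;
}
```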
`GET /token?credential=...`: Converts a Google Sign-In credential into an AI Pipe token:

- When a user clicks "Sign in with Google" on the login page, Google's client library returns a JWT credential
- The login page sends this credential to `/token?credential=...`
- AI Pipe verifies the credential using Google's public keys
- If valid, AI Pipe signs a new token containing the user's email (and optional salt) using `AIPIPE_SECRET`
- Returns `{ token, email, ... }` where additional fields come from Google's profile
## OpenRouter API
`GET /openrouter/*`: Proxy requests to OpenRouter
Example: List OpenRouter models
```sh
curl https://aipipe.org/openrouter/v1/models -H "Authorization: Bearer $AIPIPE_TOKEN"
```
Response:
```json
{
  "data": [
    {
      "id": "google/gemini-2.5-pro-preview-03-25",
      "name": "Google: Gemini 2.5 Pro Preview"
      // ...
    }
  ]
}
```
Example: Make a chat completion request
```sh
curl https://aipipe.org/openrouter/v1/chat/completions \
  -H "Authorization: Bearer $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemini-2.0-flash-lite-001", "messages": [{ "role": "user", "content": "What is 2 + 2?" }] }'
```
Response contains:
```json
{ "choices": [{ "message": { "role": "assistant", "content": "..." } }] }
```
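For responses of this shape, a one-liner can pull out the assistant's text. The helper name is hypothetical; it's just the standard path into a chat-completions payload.

```javascript
// Extract the assistant's text from a chat-completions response,
// returning null if the shape doesn't match. Illustrative helper.
const messageText = (res) => res.choices?.[0]?.message?.content ?? null;
```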
### OpenRouter Image Generation
```sh
curl https://aipipe.org/openrouter/v1/chat/completions \
  -H "Authorization: Bearer $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash-image-preview",
    "messages": [{"role": "user", "content": "Draw a cat"}],
    "modalities": ["image", "text"]
  }'
```
Response contains:
```json
{
  "choices": [
    {
      "message": {
        "images": [
          {
            "type": "image_url",
            "image_url": { "url": "data:image/png;base64,iVBORw0K..." }
          }
        ]
      }
    }
  ]
}
```
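The image comes back as a `data:` URL, so saving it means stripping the prefix and base64-decoding the rest. A sketch, assuming Node (in a browser you would use `fetch(dataUrl)` or an `<img>` tag instead); the function name is illustrative.

```javascript
// Decode a data: URL (e.g. data:image/png;base64,....) into raw bytes,
// suitable for fs.writeFileSync("cat.png", bytes). Illustrative helper.
function dataUrlToBytes(dataUrl) {
  const comma = dataUrl.indexOf(",");
  if (!dataUrl.startsWith("data:") || comma === -1) throw new Error("not a data: URL");
  return Buffer.from(dataUrl.slice(comma + 1), "base64");
}
```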
## OpenAI API
`GET /openai/*`: Proxy requests to OpenAI
AI Pipe supports all OpenAI models that return usage data in their responses, enabling accurate cost tracking. This includes chat completion models, audio preview models (e.g. `gpt-4o-audio-preview`), and transcription models (e.g. `gpt-4o-transcribe`). Text-to-speech (TTS) models like `tts-1` are not supported because they return raw audio without usage metadata.
Example: List OpenAI models
```sh
curl https://aipipe.org/openai/v1/models -H "Authorization: Bearer $AIPIPE_TOKEN"
```
Response contains:
```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o-audio-preview-2024-12-17",
      "object": "model",
      "created": 1734034239,
      "owned_by": "system"
    }
    // ...
  ]
}
```
Example: Make a responses request
```sh
curl https://aipipe.org/openai/v1/responses \
  -H "Authorization: Bearer $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5-nano", "input": "What is 2 + 2?" }'
```
Response contains:
```json
{
  "output": [{
    "role": "assistant",
    "content": [{ "text": "2 + 2 equals 4." }] // ...
  }]
}
```
Example: Create embeddings
```sh
curl https://aipipe.org/openai/v1/embeddings \
  -H "Authorization: Bearer $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "What is 2 + 2?" }'
```
Response contains:
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.010576399, -0.037246477] // ...
    }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 8, "total_tokens": 8 }
}
```
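A common use of embeddings is comparing two texts by the cosine similarity of their vectors. A standard implementation (not part of AI Pipe):

```javascript
// Cosine similarity of two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical).
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```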
## Gemini API
`GET /geminiv1beta/*`: Proxy requests to Google's Gemini API
Example: Make a generateContent request
```sh
curl https://aipipe.org/geminiv1beta/models/gemini-2.5-flash-lite:generateContent \
  -H "x-goog-api-key: $AIPIPE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"What is 2 + 2?"}]}]}'
```
Response contains:

```json
{ "candidates": [{ "content": { "parts": [{ "text": "2 + 2 is 4." }] } }] }
```
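Gemini nests its text one level deeper than the OpenAI-style responses, under `candidates[].content.parts[]`. A helper for the first candidate (name illustrative, not part of AI Pipe):

```javascript
// Join the text parts of the first Gemini candidate, or null if absent.
// Illustrative helper for the response shape shown above.
const geminiText = (res) =>
  res.candidates?.[0]?.content?.parts?.map((p) => p.text).join("") ?? null;
```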