# TurboAPI

FastAPI-compatible Python framework with a Zig HTTP core; 7x faster, free-threading native.
## Install / Use

`/learn @justrach/TurboAPIREADME`
## Status

**Alpha software — read before using in production.**
TurboAPI works and has 275+ passing tests, but:
- No TLS — put nginx or Caddy in front for HTTPS
- No slow-loris protection — requires a reverse proxy with read timeouts
- No configurable max body size — hardcoded 16MB cap
- WebSocket support is in progress, not production-ready
- HTTP/2 is not yet implemented
- Free-threaded Python 3.14t is itself relatively new — some C extensions may not be thread-safe
See SECURITY.md for the full threat model and deployment recommendations.
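The first three gaps above can be covered at the proxy layer. As an illustration only (not an official configuration; the domain, certificate paths, and bind address are placeholders), an nginx front end might terminate TLS, enforce read timeouts against slow-loris clients, and cap the request body:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                     # placeholder domain

    ssl_certificate     /etc/ssl/example.crt;    # placeholder cert paths
    ssl_certificate_key /etc/ssl/example.key;

    client_max_body_size   16m;   # mirror TurboAPI's hardcoded cap
    client_body_timeout    10s;   # defend against slow-loris body writes
    client_header_timeout  10s;   # ...and slow header trickling

    location / {
        proxy_pass http://127.0.0.1:8000;        # TurboAPI's default bind
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```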
| What works today | What's in progress |
|---|---|
| ~140k req/s on uncached HTTP routes (~16x FastAPI) | WebSocket support |
| FastAPI-compatible route decorators | HTTP/2 and TLS |
| Zig HTTP server with 24-thread pool + keep-alive | Cloudflare Workers WASM target |
| Zig-native JSON schema validation (dhi) | Fiber-based concurrency (via zag) |
| Zero-alloc response pipeline (stack buffers) | |
| Zig-native CORS (0% overhead, pre-rendered headers) | |
| Response caching for noargs handlers | |
| Static routes (pre-rendered at startup) | |
| Async handler support | |
| Full security stack (OAuth2, Bearer, API Key) | |
| Python 3.14t free-threaded support | |
| Native FFI handlers (C/Zig, no Python at all) | |
| Fuzz-tested HTTP parser, router, validator | |
## ⚡ Quick Start

Requirements: Python 3.14+ free-threaded (3.14t), Zig 0.15+
### Option 1: Docker (easiest)

```shell
git clone https://github.com/justrach/turboAPI.git
cd turboAPI
docker compose up
```
This builds Python 3.14t from source, compiles the Zig backend, and runs the example app. Hit http://localhost:8000 to verify.
### Option 2: Local install

```shell
# Install free-threaded Python
uv python install 3.14t

# Install turboapi
pip install turboapi
# Or build from source (see below)
```
```python
from turboapi import TurboAPI
from dhi import BaseModel

app = TurboAPI()

class Item(BaseModel):
    name: str
    price: float
    quantity: int = 1

@app.get("/")
def hello():
    return {"message": "Hello World"}

@app.get("/items/{item_id}")
def get_item(item_id: int):
    return {"item_id": item_id, "name": "Widget"}

@app.post("/items")
def create_item(item: Item):
    return {"item": item.model_dump(), "created": True}

if __name__ == "__main__":
    app.run()
```
```shell
python3.14t app.py
# 🚀 TurboNet-Zig server listening on 127.0.0.1:8000
```
The app also exposes an ASGI `__call__` fallback — you can use `uvicorn main:app` to test your route definitions before building the native backend, but this path is pure Python and much slower. For production, always use `app.run()` with the compiled Zig backend.
## What's New
### v1.0.27 — release guardrails
Re-cut the patch release after bad v1.0.25 and v1.0.26 publishes. v1.0.27 adds release guardrails so tag pushes fail on version drift and the manual release workflow updates every version declaration consistently.
### v1.0.26 — release metadata fix
Re-cut the patch release after v1.0.25 published stale 1.0.24 artifacts. v1.0.26 syncs the package version across all release metadata files and adds a regression test so future release bumps fail fast if those files drift.
### v1.0.25 — yxlyx compatibility cleanup
Fixed the top-level password helper exports so `turboapi.hash_password` and `turboapi.verify_password` stay coherent, and removed stale async-handler xfail markers for cases that already pass on current main. This closes issues #116 and #117.
### v1.0.24 — Zig gzip passthrough fix
Restored gzip middleware body passthrough on the Zig runtime so compressed responses keep both the correct `Content-Encoding: gzip` header and the actual compressed body. This closes issue #96 on current main.
### v1.0.23 — Shared Zig core (turboapi-core)
Extracted the radix trie router, HTTP utilities, and response cache into a standalone Zig library — turboapi-core. Both turboAPI and merjs now share the same routing and HTTP primitives. Zero performance regression (134k req/s unchanged).
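For intuition, the radix-trie routing primitive named above can be sketched in a few lines of Python. This is an illustrative model only, not the turboapi-core Zig implementation: `RadixNode` and its methods are hypothetical names, static segments match literally, and `{param}` segments capture one path component.

```python
class RadixNode:
    """Minimal route-trie sketch (illustrative, not turboapi-core)."""

    def __init__(self):
        self.children = {}   # static segment -> RadixNode
        self.param = None    # (param_name, RadixNode) for a {param} segment
        self.handler = None  # handler registered at this node, if any

    def insert(self, path, handler):
        """Register a handler for a path like '/users/{id}'."""
        node = self
        for seg in path.strip("/").split("/"):
            if seg.startswith("{") and seg.endswith("}"):
                if node.param is None:
                    node.param = (seg[1:-1], RadixNode())
                node = node.param[1]
            else:
                node = node.children.setdefault(seg, RadixNode())
        node.handler = handler

    def lookup(self, path):
        """Return (handler, captured_params) or (None, {}) on no match."""
        node, params = self, {}
        for seg in path.strip("/").split("/"):
            if seg in node.children:          # static match wins
                node = node.children[seg]
            elif node.param is not None:      # else capture as a parameter
                name, child = node.param
                params[name] = seg
                node = child
            else:
                return None, {}
        return node.handler, params
```

A production router additionally needs method dispatch, conflict handling, and allocation-free matching on the hot path; this sketch shows only the segment-matching idea.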
### v1.0.22 — Build fix
Refreshed the pinned dhi dependency hash so CI builds the turbonet extension on clean runners again.
### v1.0.21 — Compat gap fixes
Restored custom exception handlers, lifespan callables, /docs + /openapi.json serving, router-level dependencies, and StaticFiles mounts in the TestClient/runtime path. Added exact repro coverage for issues #100–#104.
### v1.0.01 — Performance (47k → 150k req/s)
Per-worker PyThreadState, PyObject_CallNoArgs for zero-arg handlers, tuple response ABI, zero-alloc sendResponse, single-parse model_sync, static routes, Zig-native CORS, enum handler dispatch, skip header parsing for simple routes, zero-alloc route params, response caching. See CHANGELOG.md for full details.
## Benchmarks

Benchmarks are split into three categories and should not be mixed:

- HTTP-only framework overhead
- End-to-end HTTP + DB
- Driver-only Postgres performance

All tables below use correct, identical response shapes and explicitly note when caches are disabled.
### HTTP Throughput (no database, cache disabled)

| Endpoint | TurboAPI | FastAPI | Speedup |
|---|---|---|---|
| GET /health | 140,586/s | 11,264/s | 12.5x |
| GET / | 149,930/s | 11,252/s | 13.3x |
| GET /json | 147,167/s | 10,721/s | 13.7x |
| GET /users/123 | 145,613/s | 9,775/s | 14.9x |
| POST /items | 155,687/s | 8,667/s | 18.0x |
| GET /status201 | 146,442/s | 11,991/s | 12.2x |
| Average | | | 14.1x |
### End-to-End HTTP + DB (uncached)

Same HTTP routes, same seeded Postgres dataset; TurboAPI response cache off, TurboAPI DB cache off, rate limiting off.

The primary table below is the median of 3 clean Docker reruns:
| Route | TurboAPI + pg.zig | FastAPI + asyncpg | FastAPI + SQLAlchemy |
|---|---|---|---|
| GET /health | 266,351/s | 9,161/s | 5,010/s |
| GET /users/{id} (varying 1000 IDs) | 80,791/s | 5,203/s | 1,983/s |
| GET /users?age_min=20 | 71,650/s | 3,162/s | 1,427/s |
| GET /search?q=user_42% | 13,245/s | 3,915/s | 1,742/s |
3-run ranges for GET /users/{id}:

- TurboAPI: 77,768..94,248/s
- FastAPI + asyncpg: 4,973..5,464/s
- FastAPI + SQLAlchemy: 1,896..2,054/s
### Driver-Only Postgres

For pure driver comparisons with no HTTP in the loop, see `benchmarks/pgbench/BENCHMARKS.md`.
## Caching
TurboAPI has two optional caching layers. Both can be disabled via environment variables:
| Cache | What it does | Env var |
|---|---|---|
| Response cache | Caches handler return values after the first call. Subsequent requests for the same route skip Python entirely. | `TURBO_DISABLE_CACHE=1` |
| DB result cache | Caches SELECT query results with a 30s TTL, 10K max entries, and per-table invalidation on writes. | `TURBO_DISABLE_DB_CACHE=1` |
| DB cache TTL | Override the default 30-second TTL. | `TURBO_DB_CACHE_TTL=5` |
The HTTP-only numbers above are measured with the response cache disabled (`TURBO_DISABLE_CACHE=1`). The end-to-end HTTP+DB table is measured with `TURBO_DISABLE_CACHE=1`, `TURBO_DISABLE_DB_CACHE=1`, and `TURBO_DISABLE_RATE_LIMITING=1`.
For database benchmarks, set `TURBO_DISABLE_DB_CACHE=1` to measure true per-request Postgres performance. With the DB cache on, cached reads hit a Zig HashMap instead of Postgres — useful in production, but not a fair framework comparison.
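As a sketch, a benchmark script might pin these switches before calling `app.run()`. The variable names come from the table above; `cache_layers_enabled` is a hypothetical helper added here for illustration, not part of TurboAPI.

```python
import os

# Benchmark configuration: turn off both caching layers so every request
# does real work. TurboAPI reads these variables at server startup.
os.environ["TURBO_DISABLE_CACHE"] = "1"     # response cache off
os.environ["TURBO_DISABLE_DB_CACHE"] = "1"  # DB result cache off

# Production-leaning alternative: keep the DB cache but shorten its TTL
# from the default 30s (only meaningful when the cache stays enabled):
# os.environ["TURBO_DB_CACHE_TTL"] = "5"

def cache_layers_enabled(environ=os.environ):
    """Hypothetical helper: report which caching layers remain active."""
    return {
        "response_cache": environ.get("TURBO_DISABLE_CACHE") != "1",
        "db_cache": environ.get("TURBO_DISABLE_DB_CACHE") != "1",
    }
```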
## How it works
- Response caching: noargs handlers
