# Codong

An AI-native programming language: one correct way to write everything.
## Install / Use

```
/learn @brettinhere/CodongREADME
```
## Releases

| Version | Date | Highlights |
|---------|------|------------|
| v0.1.3 | 2026-03-28 | Compilation cache (170× speedup), language completeness, 1,427 tests passing |
| v0.1.1 | 2026-03-26 | 58 bug fixes, 100% pass rate on core test suite |
## Why Codong
Most programming languages were designed for humans to write and machines to execute. Codong is designed for AI to write, humans to review, and machines to execute. It removes the three largest sources of friction in AI-generated code.
### Problem 1: Choice Paralysis Burns Tokens
Python has five or more ways to make an HTTP request. Every choice costs tokens and produces unpredictable output. Codong has exactly one way to do everything.
| Task | Python Options | Codong |
|------|---------------|--------|
| HTTP request | requests, urllib, httpx, aiohttp, http.client | http.get(url) |
| Web server | Flask, FastAPI, Django, Starlette, Tornado | web.serve(port: N) |
| Database | SQLAlchemy, psycopg2, pymongo, peewee, Django ORM | db.connect(url) |
| JSON parse | json.loads, orjson, ujson, simplejson | json.parse(s) |
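Put together, the single-form APIs above let an AI fetch and parse data in two lines. The following is a hypothetical sketch using only the calls shown in the table; the shape of the return value (a response with a `body` field) is an assumption, not documented here:

```
// Sketch only: assumes http.get returns a response whose body is a JSON string
response = http.get("https://example.com/api/users")
users = json.parse(response.body)
```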
### Problem 2: Errors Are Unreadable to AI
Stack traces are designed for humans. An AI agent spends hundreds of tokens parsing `Traceback (most recent call last)` before it can attempt a fix. In Codong, every error is structured JSON with a `fix` field that tells the AI exactly what to do.
```json
{
  "error": "db.find",
  "code": "E2001_NOT_FOUND",
  "message": "table 'users' not found",
  "fix": "run db.migrate() to create the table",
  "retry": false
}
```
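Because the error is a structured value, an agent can branch on it directly instead of parsing text. A hypothetical sketch using the `try`/`catch` keywords from the keyword list in this README; the catch-binding syntax and field access are assumptions:

```
// Sketch only: catch-binding syntax is an assumption
try {
    users = db.find("users")
} catch (e) {
    if e.code == "E2001_NOT_FOUND" {
        db.migrate()  // do exactly what the error's fix field says
    }
}
```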
### Problem 3: Package Selection Wastes Context
Before writing business logic, an AI must choose an HTTP library, a database driver, a JSON parser, resolve version conflicts, and configure them. Codong ships eight built-in modules that cover 90% of AI workloads. No package manager required.
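As a sketch of what "no package manager" means in practice, the program below uses the `db` and `web` built-ins with no import statement, no manifest, and no lockfile. Module names are the ones shown elsewhere in this README; the combination is illustrative, not an official example:

```
// No imports, no dependency files -- built-in modules are always available
db.connect("sqlite:///app.db")
web.get("/users", fn(req) => web.json(db.find("users")))
server = web.serve(port: 8080)
```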
### The Result: 70%+ Token Savings
| Token Cost | Python/JS | Codong | Savings |
|-----------|-----------|--------|---------|
| Choose HTTP framework | ~300 | 0 | 100% |
| Choose database ORM | ~400 | 0 | 100% |
| Parse error messages | ~500 | ~50 | 90% |
| Resolve package versions | ~800 | 0 | 100% |
| Write business logic | ~800 | ~800 | 0% |
| Total | ~2,800 | ~850 | ~70% |
## Arena Benchmark: Codong vs. Established Languages
When an AI model writes the same application in different languages, Codong produces dramatically less code, fewer tokens, and finishes faster. These numbers come from Codong Arena, where any model writes the same spec in every language and the results are measured automatically.
<p align="center">
  <img src="docs/images/arena-benchmark.svg" alt="Codong Arena Benchmark — Posts CRUD with tags, search, pagination" width="100%" />
  <br />
  <sub>Live benchmark: Claude Sonnet 4 generating a Posts CRUD API with tags, search, and pagination. <a href="https://codong.org/arena/">Run it yourself</a></sub>
</p>

| Metric | Codong | Python | JavaScript | Java | Go |
|--------|--------|--------|------------|------|-----|
| Total Tokens | 955 | 1,867 | 1,710 | 4,367 | 3,270 |
| Generation Time | 8.6s | 15.3s | 13.7s | 37.4s | 26.6s |
| Code Lines | 10 | 143 | 147 | 337 | 289 |
| Est. Cost | $0.012 | $0.025 | $0.022 | $0.062 | $0.046 |
| Output Tokens | 722 | 1,597 | 1,439 | 4,096 | 3,001 |
| vs Codong | -- | +121% | +99% | +467% | +316% |
Run your own benchmark: codong.org/arena
## Quick Start in 30 Seconds
```sh
# 1. Install
curl -fsSL https://raw.githubusercontent.com/brettinhere/Codong/main/install.sh | sh

# 2. Write your first program
echo 'print("Hello, Codong!")' > hello.cod

# 3. Run it
codong eval hello.cod
```
A web API in a few lines (save as `server.cod`):

```
web.get("/", fn(req) => web.json({message: "Hello from Codong"}))
web.get("/health", fn(req) => web.json({status: "ok"}))
server = web.serve(port: 8080)
```

```sh
codong run server.cod
# curl http://localhost:8080/
```
## Let AI Write Codong -- Zero Installation Required
You do not need to install Codong to start using it. Send the `SPEC_FOR_AI.md` file to any LLM (Claude, GPT, Gemini, LLaMA) as a system prompt or context, and the AI can immediately write correct Codong code.
Step 1. Copy the contents of `SPEC_FOR_AI.md` (under 2,000 words).

Step 2. Paste it into your AI conversation as context:

```
[Paste SPEC_FOR_AI.md contents here]

Now write a Codong REST API that manages a user list with
CRUD operations and SQLite storage.
```
Step 3. The AI generates valid Codong code:

```
db.connect("sqlite:///users.db")
db.create_table("users", {id: "integer primary key autoincrement", name: "text", email: "text"})

server = web.serve(port: 8080)
server.get("/users", fn(req) { return web.json(db.find("users")) })
server.post("/users", fn(req) { return web.json(db.insert("users", req.body), 201) })
server.get("/users/:id", fn(req) { return web.json(db.find_one("users", {id: to_number(req.param("id"))})) })
server.delete("/users/:id", fn(req) { db.delete("users", {id: to_number(req.param("id"))}); return web.json({}, 204) })
```
This works because Codong was designed with a single, unambiguous syntax for every operation. The AI does not need to choose between frameworks, import styles, or competing patterns. One correct way to write everything.
| LLM Provider | Method |
|-------------|--------|
| Claude (Anthropic) | Paste SPEC into system prompt, or use Prompt Caching for repeated use |
| GPT (OpenAI) | Paste SPEC as the first user message or system instruction |
| Gemini (Google) | Paste SPEC as context in the conversation |
| LLaMA / Ollama | Include SPEC in the system prompt via API or Ollama modelfile |
| Any LLM | Works with any model that accepts a system prompt or context window |
Benchmark it yourself: Visit codong.org/arena to see real-time token consumption and generation speed comparisons between Codong and other languages.
## Installation

```sh
curl -fsSL https://codong.org/install.sh | sh
```
Or download a binary directly from GitHub Releases:
| Platform | Binary |
|----------|--------|
| Linux x86_64 | codong-linux-amd64 |
| Linux ARM64 | codong-linux-arm64 |
| macOS Intel | codong-darwin-amd64 |
| macOS Apple Silicon | codong-darwin-arm64 |
Requirements: `codong eval` works standalone. `codong run` and `codong build` require Go 1.22+.

Verify: `codong version`
## Language Design
Codong is deliberately small. 23 keywords. 6 primitive types. One way to do each thing.
### 23 Keywords (Python: 35, JavaScript: 64, Java: 67)
```
fn return if else for while match
break continue const import export try catch
go select interface type null true false
in _
```
### Variables
```
name = "Ada"
age = 30
active = true
nothing = null

const MAX_RETRIES = 3
```

No `var`, no `let`, no `:=`. Assignment is `=`, always.
### Functions
```
fn greet(name, greeting = "Hello") {
    return "{greeting}, {name}!"
}

print(greet("Ada"))                  // Hello, Ada!
print(greet("Bob", greeting: "Hi"))  // Hi, Bob!

double = fn(x) => x * 2              // arrow function
```
### String Interpolation
```
name = "Ada"
print("Hello, {name}!")                      // variable
print("Total: {items.len()} items")          // method call
print("Sum: {a + b}")                        // expression
print("{user.name} joined on {user.date}")   // member access
```

Any expression is valid inside `{}`. No backticks, no `f"..."`, no `${}`.
### Collections
```
items = [1, 2, 3, 4, 5]
doubled = items.map(fn(x) => x * 2)
evens = items.filter(fn(x) => x % 2 == 0)
total = items.reduce(fn(acc, x) => acc + x, 0)

user = {name: "Ada", age: 30}
user.email = "ada@example.com"
print(user.get("phone", "N/A"))  // N/A
```
### Control Flow
```
if score >= 90 {
    print("A")
} else if score >= 80 {
    print("B")
} else {
    print("C")
}

for item in items {
    print(item)
}

for i in range(0, 10) {
    print(i)
}

while running {
    data = poll()
}

match status {
    200 => print("ok")
    404 => print("not found")
    _ => print("error")
}
```