Instant API
Build type-safe web APIs with JavaScript, instantly
Spec generation and LLM streaming
Instant API is a framework for building APIs with JavaScript that implements
type safety at the HTTP interface. By doing so, it eliminates the need for
schema validation libraries entirely. Simply write a JSDoc-compliant comment
block for a function that represents your API endpoint and stop worrying about
validation, sanitization and testing of user input. The OpenAPI specification
for your API is then automatically generated in both JSON and YAML at
`localhost:8000/.well-known/openapi.json` and
`localhost:8000/.well-known/openapi.yaml`, respectively.
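For illustration, a parameter type like `{?string{1..64}}` means a nullable string whose length is between 1 and 64. A minimal sketch of the kind of check this annotation implies (a hypothetical helper for illustration, not Instant API's internal code):

```javascript
// Hypothetical sketch of the validation implied by a JSDoc type like
// {?string{1..64}}: nullable string, length between 1 and 64.
// This is NOT Instant API's internal implementation.
function validateLocation (value) {
  if (value === null) return true;             // "?" marks the param nullable
  if (typeof value !== 'string') return false; // must be a string
  return value.length >= 1 && value.length <= 64; // {1..64} length range
}

validateLocation(null);      // true: null is allowed
validateLocation('Seattle'); // true: length within 1..64
validateLocation('');        // false: below minimum length
```

With Instant API, checks like this are derived from the comment block automatically, so the endpoint body never sees an out-of-range value.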
Additionally, Instant API comes with a number of features optimized for integrations with LLMs and chatbots:
- First-class support for Server-Sent Events using `text/event-stream` makes streaming LLM responses easy
- LLM function calling can be integrated easily via JSON schema output at `localhost:8000/.well-known/schema.json`
- Experimental auto-generation of `localhost:8000/.well-known/ai-plugin.json`
- The ability to instantly return `200 OK` responses and execute in the background for Slack and Discord webhooks
You will find Instant API is a very full-featured framework despite being an early release. It has been in development for six years as the engine behind the Autocode serverless platform where it has horizontally scaled to handle over 100M API requests per day.
Quick example: Standard API
Here's an example API endpoint built with Instant API. It would be available
at the URL `example.com/v1/weather/current` via HTTP GET. It has length
restrictions on `location`, range restrictions on `coords.lat` and `coords.lng`,
and `tags` is an array of strings. The `@returns` definitions ensure that the API
contract with the user is upheld: if the wrong data is returned, an error will be
thrown.
File: `/functions/v1/weather/current.mjs`

```javascript
/**
 * Retrieve the weather for a specific location
 * @param {?string{1..64}} location Search by location
 * @param {?object} coords Provide specific latitude and longitude
 * @param {number{-90,90}} coords.lat Latitude
 * @param {number{-180,180}} coords.lng Longitude
 * @param {string[]} tags Nearby locations to include
 * @returns {object} weather Your weather result
 * @returns {number} weather.temperature Current temperature of the location
 * @returns {string} weather.unit Fahrenheit or Celsius
 */
export async function GET (location = null, coords = null, tags = []) {
  if (!location && !coords) {
    // Prefixing an error message with a "###:" between 400 and 404
    // automatically creates the correct client error:
    // BadRequestError, UnauthorizedError, PaymentRequiredError,
    // ForbiddenError, NotFoundError
    // Otherwise, will throw a RuntimeError with code 420
    throw new Error(`400: Must provide either location or coords`);
  } else if (location && coords) {
    throw new Error(`400: Can not provide both location and coords`);
  }
  // Fetch your own API data
  await getSomeWeatherDataFor(location, coords, tags);
  // Mock a response
  return {
    temperature: 89.2,
    unit: `°F`
  };
}
```
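The `###:` prefix convention described in the comments above can be sketched as a simple mapping. The error class names come from the comment block; the helper itself is hypothetical, not Instant API's actual internals:

```javascript
// Hypothetical sketch of the "###:" error-prefix convention: messages
// prefixed with a code from 400-404 map to a specific client error,
// anything else becomes a RuntimeError with code 420.
const CLIENT_ERRORS = {
  400: 'BadRequestError',
  401: 'UnauthorizedError',
  402: 'PaymentRequiredError',
  403: 'ForbiddenError',
  404: 'NotFoundError'
};

function classifyError (message) {
  const match = message.match(/^(\d{3}):/);
  const code = match ? parseInt(match[1], 10) : null;
  if (code !== null && CLIENT_ERRORS[code]) {
    return {name: CLIENT_ERRORS[code], statusCode: code};
  }
  return {name: 'RuntimeError', statusCode: 420}; // default per the comment above
}

classifyError('400: Must provide either location or coords');
// -> {name: 'BadRequestError', statusCode: 400}
```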
Quick example: LLM Streaming
LLM streaming is simple. It relies on a special context object and defining
`@stream` parameters to create a `text/event-stream` response. You can think
of `@stream` as similar to `@returns`, where you're specifying the schema
for the output to the user. If this contract is broken, your API will throw an
error. In order to send a stream to the user, we add a special context object
to the function signature as the last parameter and use the exposed `context.stream()`
method.
File: `/functions/v1/ai-helper.mjs`

```javascript
import OpenAI from 'openai';
const openai = new OpenAI({apiKey: process.env.OPENAI_API_KEY});

/**
 * Streams results for our lovable assistant
 * @param {string} query The question for our assistant
 * @stream {object} chunk
 * @stream {string} chunk.id
 * @stream {string} chunk.object
 * @stream {integer} chunk.created
 * @stream {string} chunk.model
 * @stream {object[]} chunk.choices
 * @stream {integer} chunk.choices[].index
 * @stream {object} chunk.choices[].delta
 * @stream {?string} chunk.choices[].delta.role
 * @stream {?string} chunk.choices[].delta.content
 * @returns {object} message
 * @returns {string} message.content
 */
export async function GET (query, context) {
  const completion = await openai.chat.completions.create({
    messages: [
      {role: `system`, content: `You are a lovable, cute assistant that uses too many emojis.`},
      {role: `user`, content: query}
    ],
    model: `gpt-3.5-turbo`,
    stream: true
  });
  const messages = [];
  for await (const chunk of completion) {
    // Stream our response as text/event-stream when the ?_stream parameter is added
    context.stream('chunk', chunk); // chunk has the schema provided above
    messages.push(chunk?.choices?.[0]?.delta?.content || '');
  }
  return {content: messages.join('')};
}
```
By default, this method will return something like:

```json
{
  "content": "Hey there! 💁♀️ I'm doing great, thank you! 💖✨ How about you? 😊🌈"
}
```
However, if you append `?_stream` to the query parameters or `{"_stream": true}` to
the body parameters, it will turn into a `text/event-stream` with your `context.stream()`
events sandwiched between a `@begin` and a `@response` event. The `@response` event
will be an object containing the details of what the HTTP response would have contained
had the API call been made normally.
```text
id: 2023-10-25T04:29:59.115000000Z/2e7c7860-4a66-4824-98fa-a7cf71946f19
event: @begin
data: "2023-10-25T04:29:59.115Z"

[... more events ...]

event: chunk
data: {"id":"chatcmpl-8DPoluIgN4TDIuE1usFOKTLPiIUbQ","object":"chat.completion.chunk","created":1698208199,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":" 💯"},"finish_reason":null}]}

[... more events ...]

event: @response
data: {"statusCode":200,"headers":{"X-Execution-Uuid":"2e7c7860-4a66-4824-98fa-a7cf71946f19","X-Instant-Api":"true","Access-Control-Allow-Origin":"*","Access-Control-Allow-Methods":"GET, POST, OPTIONS, HEAD, PUT, DELETE","Access-Control-Allow-Headers":"","Access-Control-Expose-Headers":"x-execution-uuid, x-instant-api, access-control-allow-origin, access-control-allow-methods, access-control-allow-headers, x-execution-uuid","Content-Type":"application/json"},"body":"{\"content\":\"Hey there! 🌞 I'm feeling 💯 today! Full of energy and ready to help you out. How about you? How are you doing? 🌈😊\"}"}
```
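A client can consume a stream like the one above by splitting on blank lines and reading the `event:` and `data:` fields of each frame. A minimal parser sketch (generic Server-Sent Events handling, not an official Instant API client):

```javascript
// Minimal sketch of parsing text/event-stream frames like the ones above.
// Generic SSE field handling; not an official Instant API client library.
function parseSSE (raw) {
  return raw
    .split('\n\n')                 // events are separated by blank lines
    .filter(block => block.trim()) // drop trailing empty blocks
    .map(block => {
      const event = {event: 'message', data: ''};
      for (const line of block.split('\n')) {
        if (line.startsWith('event: ')) event.event = line.slice(7);
        else if (line.startsWith('data: ')) event.data += line.slice(6);
        else if (line.startsWith('id: ')) event.id = line.slice(4);
      }
      return event;
    });
}

const events = parseSSE(
  'event: @begin\ndata: "2023-10-25T04:29:59.115Z"\n\n' +
  'event: chunk\ndata: {"choices":[{"delta":{"content":"Hi"}}]}\n\n'
);
// events[0].event is '@begin'; JSON.parse(events[1].data) yields the chunk object
```

In a browser, the built-in `EventSource` API (or reading a `fetch` response body incrementally) performs this framing for you; the sketch just makes the `@begin`/`chunk`/`@response` structure explicit.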
Table of Contents
- Getting Started
- Endpoints and Type Safety
- OpenAPI Specification Generation
- Streaming and LLM Support
- Background execution for webhooks and chatbots
  - `@background` directive
  - Using the `_background` parameter