Instant API


Build type-safe web APIs with JavaScript, instantly

Spec generation and LLM streaming

Instant API is a framework for building APIs with JavaScript that implements type-safety at the HTTP interface. By doing so, it eliminates the need for schema validation libraries entirely. Simply write a JSDoc-compliant comment block for a function that represents your API endpoint and stop worrying about validation, sanitization and testing for user input. The OpenAPI specification for your API is then automatically generated in both JSON and YAML at localhost:8000/.well-known/openapi.json and localhost:8000/.well-known/openapi.yaml, respectively.

Additionally, Instant API comes with a number of features optimized for integrations with LLMs and chatbots:

  • First class support for Server-Sent Events using text/event-stream makes streaming LLM responses easy
  • LLM function calling can be integrated easily via JSON schema output at localhost:8000/.well-known/schema.json
  • Experimental auto-generation of localhost:8000/.well-known/ai-plugin.json
  • The ability to instantly return 200 OK responses and execute in the background, useful for Slack and Discord webhooks

You will find Instant API is a very full-featured framework despite being an early release. It has been in development for six years as the engine behind the Autocode serverless platform where it has horizontally scaled to handle over 100M API requests per day.

Quick example: Standard API

Here's an example API endpoint built with Instant API. It would be available via HTTP GET at the URL example.com/v1/weather/current. It has length restrictions on location, range restrictions on coords.lat and coords.lng, and tags is an array of strings. The @returns definitions ensure that the API contract with the user is upheld: if the wrong data is returned, an error will be thrown.

File: /functions/v1/weather/current.mjs

/**
 * Retrieve the weather for a specific location
 * @param {?string{1..64}}   location    Search by location
 * @param {?object}          coords      Provide specific latitude and longitude
 * @param {number{-90,90}}   coords.lat  Latitude
 * @param {number{-180,180}} coords.lng  Longitude
 * @param {string[]}         tags        Nearby locations to include
 * @returns {object} weather             Your weather result
 * @returns {number} weather.temperature Current temperature of the location
 * @returns {string} weather.unit        Fahrenheit or Celsius
 */
export async function GET (location = null, coords = null, tags = []) {

  if (!location && !coords) {
    // Prefixing an error message with a "###:" between 400 and 404
    //   automatically creates the correct client error:
    //     BadRequestError, UnauthorizedError, PaymentRequiredError,
    //     ForbiddenError, NotFoundError
    // Otherwise, will throw a RuntimeError with code 420
    throw new Error(`400: Must provide either location or coords`);
  } else if (location && coords) {
    throw new Error(`400: Can not provide both location and coords`);
  }

  // Fetch your own API data
  await getSomeWeatherDataFor(location, coords, tags);

  // Mock a response; must match the @returns schema above
  return {
    temperature: 89.2,
    unit: `°F`
  };

}
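The `###:` error-prefix convention noted in the comments above can be illustrated with a small stand-alone helper. This is a hypothetical sketch for explanation only; Instant API handles this mapping internally:

```javascript
// Hypothetical sketch of the "###:" error-prefix convention: status codes
// 400 through 404 map to specific client errors, and anything else falls
// through to a generic RuntimeError with code 420.
const CLIENT_ERRORS = {
  400: 'BadRequestError',
  401: 'UnauthorizedError',
  402: 'PaymentRequiredError',
  403: 'ForbiddenError',
  404: 'NotFoundError'
};

function classifyError (message) {
  const match = /^(\d{3}):\s*(.*)$/.exec(message);
  if (match) {
    const statusCode = parseInt(match[1], 10);
    if (CLIENT_ERRORS[statusCode]) {
      return {statusCode, type: CLIENT_ERRORS[statusCode], message: match[2]};
    }
  }
  // No recognized prefix: treat as a RuntimeError with code 420
  return {statusCode: 420, type: 'RuntimeError', message};
}
```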

Quick example: LLM Streaming

LLM streaming is simple. It relies on a special context object and @stream parameter definitions to create a text/event-stream response. You can think of @stream as similar to @returns: you're specifying the schema for the output sent to the user, and if this contract is broken, your API will throw an error. To send a stream to the user, we add the special context object as the last parameter of the function signature and use its exposed context.stream() method.

File: /functions/v1/ai-helper.mjs

import OpenAI from 'openai';
const openai = new OpenAI({apiKey: process.env.OPENAI_API_KEY});

/**
 * Streams results for our lovable assistant
 * @param {string} query The question for our assistant
 * @stream {object}   chunk
 * @stream {string}   chunk.id
 * @stream {string}   chunk.object
 * @stream {integer}  chunk.created
 * @stream {string}   chunk.model
 * @stream {object[]} chunk.choices
 * @stream {integer}  chunk.choices[].index
 * @stream {object}   chunk.choices[].delta
 * @stream {?string}  chunk.choices[].delta.role
 * @stream {?string}  chunk.choices[].delta.content
 * @returns {object} message
 * @returns {string} message.content
 */
export async function GET (query, context) {
  const completion = await openai.chat.completions.create({
    messages: [
      {role: `system`, content: `You are a lovable, cute assistant that uses too many emojis.`},
      {role: `user`, content: query}
    ],
    model: `gpt-3.5-turbo`,
    stream: true
  });
  const messages = [];
  for await (const chunk of completion) {
    // Stream our response as text/event-stream when ?_stream parameter added
    context.stream('chunk', chunk); // chunk has the schema provided above
    messages.push(chunk?.choices?.[0]?.delta?.content || '');
  }
  return {content: messages.join('')};
}

By default, this method will return something like:

{
  "content": "Hey there! 💁‍♀️ I'm doing great, thank you! 💖✨ How about you? 😊🌈"
}

However, if you append ?_stream to query parameters or {"_stream": true} to body parameters, it will turn into a text/event-stream with your context.stream() events sandwiched between a @begin and @response event. The @response event will be an object containing the details of what the HTTP response would have contained had the API call been made normally.

id: 2023-10-25T04:29:59.115000000Z/2e7c7860-4a66-4824-98fa-a7cf71946f19
event: @begin
data: "2023-10-25T04:29:59.115Z"

[... more events ...]

event: chunk
data: {"id":"chatcmpl-8DPoluIgN4TDIuE1usFOKTLPiIUbQ","object":"chat.completion.chunk","created":1698208199,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":" 💯"},"finish_reason":null}]}

[... more events ...]

event: @response
data: {"statusCode":200,"headers":{"X-Execution-Uuid":"2e7c7860-4a66-4824-98fa-a7cf71946f19","X-Instant-Api":"true","Access-Control-Allow-Origin":"*","Access-Control-Allow-Methods":"GET, POST, OPTIONS, HEAD, PUT, DELETE","Access-Control-Allow-Headers":"","Access-Control-Expose-Headers":"x-execution-uuid, x-instant-api, access-control-allow-origin, access-control-allow-methods, access-control-allow-headers, x-execution-uuid","Content-Type":"application/json"},"body":"{\"content\":\"Hey there! 🌞 I'm feeling 💯 today! Full of energy and ready to help you out. How about you? How are you doing? 🌈😊\"}"}
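On the client side, an event dump like the one above can be decoded with a minimal parser for the `field: value` block format. This is a hypothetical sketch for illustration; in browsers, GET-based streams can also be consumed with the built-in EventSource API:

```javascript
// Hypothetical sketch: parse a text/event-stream payload like the sample
// above. Events are blocks of "field: value" lines separated by blank
// lines; "data" fields carry JSON.
function parseEventStream (raw) {
  return raw
    .split('\n\n')
    .filter(block => block.trim())
    .map(block => {
      const event = {};
      for (const line of block.split('\n')) {
        const idx = line.indexOf(': ');
        if (idx === -1) continue;
        const field = line.slice(0, idx);
        const value = line.slice(idx + 2);
        event[field] = field === 'data' ? JSON.parse(value) : value;
      }
      return event;
    });
}
```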

Table of Contents

  1. Getting Started
    1. Quickstart
    2. Custom installation
  2. Endpoints and Type Safety
    1. Creating Endpoints
    2. Responding to HTTP methods
      1. Endpoint lifecycle
      2. Typing your endpoint
        1. Undocumented parameters
        2. Required parameters
        3. Optional parameters
      3. context object
      4. API endpoints: functions/ directory
        1. Index routing with index.mjs
        2. Subdirectory routing with 404.mjs
      5. Static files: www/ directory
        1. Index routing with index.html
        2. Subdirectory routing with 404.html
    3. Type Safety
      1. Supported types
      2. Type coercion
      3. Combining types
      4. Enums and restricting to specific values
      5. Sizes (lengths)
      6. Ranges
      7. Arrays
      8. Object schemas
    4. Parameter validation
      1. Query and Body parsing with application/x-www-form-urlencoded
      2. Query vs. Body parameters
    5. CORS (Cross-Origin Resource Sharing)
    6. Returning responses
      1. @returns type safety
      2. Error responses
      3. Custom HTTP responses
      4. Returning files with Buffer responses
      5. Streaming responses
      6. Debug responses
    7. Throwing errors
  3. OpenAPI Specification Generation
    1. OpenAPI Output Example
    2. JSON Schema Output Example
    3. Hiding endpoints with @private
  4. Streaming and LLM Support
    1. @stream type safety
    2. Using context.stream()
    3. Using the _stream parameter
      1. Selectively listening to specific streams
  5. Background execution for webhooks and chatbots
    1. @background directive
    2. Using the _background parameter
