
<p> <a href="https://www.npmjs.com/package/@dexaai/dexter"><img alt="NPM" src="https://img.shields.io/npm/v/@dexaai/dexter.svg" /></a> <a href="https://github.com/dexaai/dexter/actions/workflows/test.yml"><img alt="Build Status" src="https://github.com/dexaai/dexter/actions/workflows/main.yml/badge.svg" /></a> <a href="https://github.com/dexaai/dexter/blob/main/license"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue" /></a> <a href="https://prettier.io"><img alt="Prettier Code Formatting" src="https://img.shields.io/badge/code_style-prettier-brightgreen.svg" /></a> </p>

Dexter

Dexter is a powerful TypeScript library for working with Large Language Models (LLMs), with a focus on real-world Retrieval-Augmented Generation (RAG) applications. It provides a set of tools and utilities to interact with various AI models, manage caching, handle embeddings, and implement AI functions.

Features

  • Comprehensive Model Support: Implementations for Chat, Completion, Embedding, and Sparse Vector models, with efficient OpenAI API integration via openai-fetch.

  • Advanced AI Function Utilities: Tools for creating and managing AI functions, including createAIFunction, createAIExtractFunction, and createAIRunner, with Zod integration for schema validation.

  • Structured Data Extraction: Dexter supports OpenAI's structured output feature through the createExtractFunction, which uses the response_format parameter with a JSON schema derived from a Zod schema.

  • Flexible Caching and Tokenization: Built-in caching system with custom cache support, and advanced tokenization based on tiktoken for accurate token management.

  • Robust Observability and Control: Customizable telemetry system, comprehensive event hooks, and specialized error handling for enhanced monitoring and control.

  • Performance Optimization: Built-in support for batching, throttling, and streaming, optimized for handling large-scale operations and real-time responses.

  • TypeScript-First and Environment Flexible: Fully typed for excellent developer experience, with minimal dependencies and compatibility across Node.js 18+, Deno, Cloudflare Workers, and Vercel edge functions.
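The batching behavior mentioned above can be sketched conceptually. This is not Dexter's internal implementation, just the general idea: split a large list of inputs into fixed-size chunks so each API call stays within provider limits.

```typescript
// Split items into chunks of at most `size` elements each.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1) throw new Error('size must be >= 1');
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```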

Table of Contents

  • Installation
  • Usage
  • Examples
  • Repository Structure
  • API Reference

Installation

To install Dexter, use your preferred package manager:

npm install @dexaai/dexter

This package requires node >= 18 or an environment with fetch support.

This package exports ESM. If your project uses CommonJS, consider switching to ESM or use the dynamic import() function.
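The dynamic import() pattern looks like this. A Node built-in module is imported here so the sketch is runnable; the same pattern applies to '@dexaai/dexter' from a CommonJS file.

```typescript
// Dynamic import() can load ESM-only packages from CommonJS code.
async function loadAndJoin(): Promise<string> {
  const path = await import('node:path');
  // e.g. const { ChatModel } = await import('@dexaai/dexter');
  return path.posix.join('a', 'b');
}
```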

Usage

Here's a basic example of how to use the ChatModel:

import { ChatModel } from '@dexaai/dexter';

const chatModel = new ChatModel({
  params: { model: 'gpt-3.5-turbo' },
});

const response = await chatModel.run({
  messages: [{ role: 'user', content: 'Tell me a short joke' }],
});
console.log(response.message.content);

Examples

Chat Completion with Streaming

import { ChatModel, MsgUtil } from '@dexaai/dexter';

const chatModel = new ChatModel({
  params: { model: 'gpt-4' },
});

const response = await chatModel.run({
  messages: [MsgUtil.user('Write a short story about a robot learning to love')],
  handleUpdate: (chunk) => {
    process.stdout.write(chunk);
  },
});
console.log('\n\nFull response:', response.message.content);

Extracting Structured Data

import { ChatModel, createExtractFunction } from '@dexaai/dexter';
import { z } from 'zod';

const extractPeopleNames = createExtractFunction({
  chatModel: new ChatModel({ params: { model: 'gpt-4o-mini' } }),
  systemMessage: `You extract the names of people from unstructured text.`,
  name: 'people_names',
  schema: z.object({
    names: z.array(
      z.string().describe(
        `The name of a person from the message. Normalize the name by removing suffixes, prefixes, and fixing capitalization`
      )
    ),
  }),
});

const peopleNames = await extractPeopleNames(
  `Dr. Andrew Huberman interviewed Tony Hawk, an idol of Andrew Huberman's.`
);

console.log('peopleNames', peopleNames);
// => ['Andrew Huberman', 'Tony Hawk']

Using AI Functions

import { ChatModel, MsgUtil, createAIFunction } from '@dexaai/dexter';
import { z } from 'zod';

const getWeather = createAIFunction(
  {
    name: 'get_weather',
    description: 'Gets the weather for a given location',
    argsSchema: z.object({
      location: z.string().describe('The city and state e.g. San Francisco, CA'),
      unit: z.enum(['c', 'f']).optional().default('f').describe('The unit of temperature to use'),
    }),
  },
  async ({ location, unit }) => {
    // Simulate API call
    await new Promise((resolve) => setTimeout(resolve, 500));
    return {
      location,
      unit,
      temperature: Math.floor(Math.random() * 30) + 10,
      condition: ['sunny', 'cloudy', 'rainy'][Math.floor(Math.random() * 3)],
    };
  }
);

const chatModel = new ChatModel({
  params: {
    model: 'gpt-4',
    tools: [{ type: 'function', function: getWeather.spec }],
  },
});

const response = await chatModel.run({
  messages: [MsgUtil.user('What\'s the weather like in New York?')],
});
console.log(response.message);
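When the model decides to call a tool, the response carries the tool call rather than a plain text answer. The sketch below shows how such a call can be dispatched to a local handler. The tool_calls shape follows the OpenAI chat API; whether Dexter's response.message exposes it in exactly this form is an assumption here.

```typescript
type ToolCall = { function: { name: string; arguments: string } };

// Look up the named tool and invoke it with the parsed JSON arguments.
function dispatchToolCall(
  call: ToolCall,
  tools: Record<string, (args: any) => any>
): any {
  const fn = tools[call.function.name];
  if (!fn) throw new Error(`Unknown tool: ${call.function.name}`);
  // Tool arguments arrive as a JSON string and must be parsed before the call.
  return fn(JSON.parse(call.function.arguments));
}

// Local stand-in for the getWeather handler above:
const result = dispatchToolCall(
  {
    function: {
      name: 'get_weather',
      arguments: '{"location":"New York, NY","unit":"f"}',
    },
  },
  { get_weather: (args) => ({ ...args, temperature: 72 }) }
);
```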

Embedding Generation

import { EmbeddingModel } from '@dexaai/dexter';

const embeddingModel = new EmbeddingModel({
  params: { model: 'text-embedding-ada-002' },
});

const response = await embeddingModel.run({
  input: ['Hello, world!', 'How are you?'],
});
console.log(response.embeddings);
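The embeddings returned above are typically compared with cosine similarity, for example when ranking documents in a RAG pipeline. A minimal sketch in plain TypeScript, independent of Dexter:

```typescript
// Cosine similarity of two dense vectors: dot product over the product of norms.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('vectors must have equal length');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```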

Repository Structure

The Dexter library is organized into the following main directories:

  • src/: Contains the source code for the library
    • model/: Core model implementations and utilities
    • ai-function/: AI function creation and handling
  • examples/: Contains example scripts demonstrating library usage
  • dist/: Contains the compiled JavaScript output (generated after build)

Key files:

  • src/model/chat.ts: Implementation of the ChatModel
  • src/model/completion.ts: Implementation of the CompletionModel
  • src/model/embedding.ts: Implementation of the EmbeddingModel
  • src/model/sparse-vector.ts: Implementation of the SparseVectorModel
  • src/ai-function/ai-function.ts: AI function creation utilities
  • src/model/utils/: Various utility functions and helpers

API Reference

ChatModel

The ChatModel class is used for interacting with chat-based language models.

Constructor

new ChatModel(args?: ChatModelArgs<CustomCtx>)
  • args: Optional configuration object
    • params: Model parameters (e.g., model, temperature)
    • client: Custom OpenAI client (optional)
    • cache: Cache implementation (optional)
    • context: Custom context object (optional)
    • events: Event handlers (optional)
    • debug: Enable debug logging (optional)

Methods

  • run(params: ChatModelRun, context?: CustomCtx): Promise<ChatModelResponse>
    • Executes the chat model with the given parameters and context
  • extend(args?: PartialChatModelArgs<CustomCtx>): ChatModel<CustomCtx>
    • Creates a new instance of the model with modified configuration
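The effect of extend is to layer new configuration over the existing instance's configuration. The merge semantics can be sketched as follows; this is a conceptual sketch, not Dexter's source, and the one-level-deep merge of params is an assumption.

```typescript
type Config = { params?: Record<string, unknown>; debug?: boolean };

// Shallow-merge the override onto the base, merging nested params one level deep
// so that unspecified params from the base instance are preserved.
function extendConfig(base: Config, override: Config): Config {
  return {
    ...base,
    ...override,
    params: { ...base.params, ...override.params },
  };
}
```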

CompletionModel

The CompletionModel class is used for text completion tasks.

Constructor

new CompletionModel(args?: CompletionModelArgs<CustomCtx>)
  • args: Optional configuration object (similar to ChatModel)

Methods

  • run(params: CompletionModelRun, context?: CustomCtx): Promise<CompletionModelResponse>
    • Executes the completion model with the given parameters and context
  • extend(args?: PartialCompletionModelArgs<CustomCtx>): CompletionModel<CustomCtx>
    • Creates a new instance of the model with modified configuration

EmbeddingModel

The EmbeddingModel class is used for generating embeddings from text.

Constructor

new EmbeddingModel(args?: EmbeddingModelArgs<CustomCtx>)
  • args: Optional configuration object (similar to ChatModel)

Methods

  • run(params: EmbeddingModelRun, context?: CustomCtx): Promise<EmbeddingModelResponse>
    • Generates embeddings for the given input texts
  • extend(args?: PartialEmbeddingModelArgs<CustomCtx>): EmbeddingModel<CustomCtx>
    • Creates a new instance of the model with modified configuration

SparseVectorModel

The SparseVectorModel class is used for generating sparse vector representations.

Constructor

new SparseVectorModel(args: SparseVectorModelArgs<CustomCtx>)
  • args: Configuration object
    • serviceUrl: URL of the SPLADE service (required)
    • Other options similar to ChatModel

Methods

  • run(params: SparseVectorModelRun, context?: CustomCtx): Promise<SparseVectorModelResponse>
    • Generates sparse vector representations for the given input texts
  • extend(args?: PartialSparseVectorModelArgs<CustomCtx>): SparseVectorModel<CustomCtx>
    • Creates a new instance of the model with modified configuration
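Sparse vectors of the kind produced by SPLADE pair token indices with weights, and relevance is typically scored with a sparse dot product. A conceptual sketch follows; the Map-based representation is an assumption, not Dexter's actual output format.

```typescript
// Sparse vector as a map from token index to weight.
type SparseVector = Map<number, number>;

// Dot product of two sparse vectors: iterate over the smaller one and look up
// matching indices in the other, skipping indices present in only one vector.
function sparseDot(a: SparseVector, b: SparseVector): number {
  const [small, large] = a.size <= b.size ? [a, b] : [b, a];
  let sum = 0;
  for (const [index, weight] of small) {
    const other = large.get(index);
    if (other !== undefined) sum += weight * other;
  }
  return sum;
}
```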

Extract Functions

createExtractFunction

Creates a function to extract structured data from text using OpenAI's structured output feature. It takes a chatModel, a systemMessage, a function name, and a Zod schema (see the Extracting Structured Data example above), and returns an async function that maps unstructured text to data matching the schema.
