# LLume
LLume is a lightweight, type-safe Node.js framework designed to streamline the creation and execution of structured, predictable interactions with Large Language Models (LLMs). It emphasizes developer experience through strong typing, clear abstractions, and built-in utilities for common LLM workflow patterns.
## TL;DR - Quick Examples

### Simple AI calculator
```typescript
import { z } from "zod";
import { createAiFunction } from "llume";

// 1. Define schemas
const schemas = {
  input: z.object({
    expression: z.string(),
  }),
  output: z.object({
    result: z.number().describe("The numerical result of the calculation"),
  }),
};

// 2. Create AI function
const calculate = createAiFunction(
  {
    functionId: "calculator",
    inputSchema: schemas.input,
    outputSchema: schemas.output,
    userQueryTemplate: "Calculate: {{{expression}}}",
  },
  {
    llmProvider: new YourLLMProvider(), // your LLMProvider implementation (see Quick Start)
  },
);

// 3. Use!
const result = await calculate({ expression: "10 * (5 + 3)" });
console.log(result.result); // 80
```
## Core Concept: AiFunction
The central abstraction in LLume is the AiFunction. It represents a single, reusable task delegated to an LLM, defined by:
- Input Schema (Zod): Specifies the structure and types of the data required to execute the function. Ensures runtime validation.
- Output Schema (Zod): Defines the expected structure and types of the JSON object the LLM should return. Enables safe parsing and validation of the LLM's response.
- Prompt Templates (Handlebars):
  - `userQueryTemplate`: Constructs the specific user request using variables from the validated input.
  - `promptTemplate` (optional): Defines the overall structure of the prompt sent to the LLM, integrating the `userQuery`, system instructions, and potentially the required JSON schema (derived from the output schema). A robust default template is provided if this is omitted.
- LLM Provider: An abstraction (the `LLMProvider` interface) for interacting with any LLM API (e.g., OpenAI, Anthropic, Gemini, or custom providers such as the example `Ai0Provider`).
- Execution Context: A container (`ExecutionContext`) for shared resources such as the `LLMProvider`, caching mechanisms (`CacheProvider`), and event handlers (`EventHandler`).
- Configuration: Fine-grained control over retries (attempts, delays, conditions), caching (TTL, enabling/disabling), and LLM-specific parameters.
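To give a feel for the retry behavior that the configuration controls, here is a small standalone sketch of retrying a flaky async operation with a fixed delay between attempts. The `withRetries` helper is purely illustrative and is not part of the LLume API; LLume applies this kind of logic internally based on your retry options.

```typescript
// Illustrative retry helper (hypothetical, NOT the LLume API): retries a
// failing async operation up to maxAttempts times, waiting delayMs between
// attempts, and rethrows the last error if every attempt fails.
async function withRetries<T>(
  operation: () => Promise<T>,
  options: { maxAttempts: number; delayMs: number },
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= options.maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < options.maxAttempts) {
        // Wait before the next attempt (a real implementation might back off).
        await new Promise((resolve) => setTimeout(resolve, options.delayMs));
      }
    }
  }
  throw lastError;
}

// Usage: an operation that fails twice, then succeeds on the third attempt.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
};

withRetries(flaky, { maxAttempts: 3, delayMs: 10 }).then((result) => {
  console.log(result, calls); // "ok" 3
});
```

A production implementation would typically also distinguish retryable errors (timeouts, rate limits) from permanent ones (validation failures), which is what the "conditions" part of the configuration refers to.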
## Features
- ✨ Type Safety: Leverages Zod for rigorous compile-time and runtime validation of inputs and outputs.
- 📝 Structured Output: Enforces reliable JSON output from LLMs by automatically including JSON schema instructions in the default prompt.
- 🔧 Flexible Prompting: Utilizes Handlebars for dynamic prompt templating, allowing complex logic and full control over the prompt structure.
- 🔄 LLM Agnostic: Designed to work with any LLM through a simple `LLMProvider` interface with built-in caching capabilities.
- 🔁 Robust Error Handling: Comprehensive error-handling system with specific error types and automatic retries for transient failures.
- ⚡ Advanced Caching: Flexible caching system with pluggable providers and TTL support to optimize performance and costs.
- 📢 Event-Driven Architecture: Rich event system for monitoring, logging, and debugging the entire execution lifecycle.
- 🧩 Modular Design: Clean separation of concerns with dedicated modules for core functionality, LLM integration, parsing, caching, and events.
- 🚫 Defensive Programming: Built-in validation at every step with clear error messages and recovery strategies.
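As a rough mental model for the caching feature, the sketch below implements a standalone TTL cache in the spirit of a pluggable cache provider. The `TtlCache` class and its method names are illustrative only; they are not LLume's actual `CacheProvider` interface.

```typescript
// Illustrative TTL cache (hypothetical, NOT LLume's CacheProvider): stores
// values with an expiry timestamp and evicts them lazily on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private defaultTtlMs: number) {}

  set(key: string, value: V, ttlMs = this.defaultTtlMs): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
}

// Usage: cache a raw LLM response under a key derived from the input,
// with a 1-minute TTL (matching the ttl: 60000 option used later).
const cache = new TtlCache<string>(60_000);
cache.set('sentiment:"great framework"', '{"sentiment":"positive"}');
console.log(cache.get('sentiment:"great framework"'));
```

Caching LLM responses by input key is what lets repeated calls with identical input skip the (slow, billable) LLM round trip entirely.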
## Table of Contents
- Installation
- Quick Start Example
- API Overview
- Advanced Usage
- Technology Stack
- Development & Testing
- Contributing
- License
## Installation
```shell
npm install llume
# or
yarn add llume
# or
bun add llume
```
Note: LLume uses `zod`, `handlebars`, and `zod-to-json-schema` internally. You don't need to install them separately unless you use them directly in your project code.
## Quick Start Example
```typescript
import { z } from "zod";
import {
  createAiFunction,
  type ExecutionContext,
  type AiFunctionDefinition,
  type LLMProvider, // Interface for LLM interaction
  type LLMResponse, // Expected response structure from LLMProvider
  // Optional built-in cache:
  InMemoryCacheProvider,
  // Optional event handler example:
  type EventHandler,
  type ExecutionEvent,
  ExecutionEventType,
} from "llume";

// --- Example Implementations (Replace with your actual providers) ---

// 1. Mock LLM Provider (Replace with your actual LLM API client)
class MockLLMProvider implements LLMProvider {
  async generate(prompt: string): Promise<LLMResponse> {
    console.log("\n--- Mock LLM Received Prompt ---\n", prompt);
    // Simulate a response based on prompt analysis
    let sentiment = "neutral";
    let confidence = 0.5;
    if (prompt.toLowerCase().includes("great") || prompt.toLowerCase().includes("easier")) {
      sentiment = "positive";
      confidence = 0.95;
    } else if (prompt.toLowerCase().includes("bad") || prompt.toLowerCase().includes("difficult")) {
      sentiment = "negative";
      confidence = 0.85;
    }
    const rawOutput = JSON.stringify({ sentiment, confidence });
    console.log("--- Mock LLM Sending Response ---\n", rawOutput);
    return { rawOutput, modelInfo: { name: "MockLLM/v1" } };
  }
}

// 2. Simple Console Event Handler (Optional: for observing execution)
class ConsoleEventHandler implements EventHandler {
  publish(event: ExecutionEvent): void {
    // Log specific events or all events
    if (event.type === ExecutionEventType.PROMPT_COMPILATION_END) {
      // Log less verbose info for this event
      console.log(`[EVENT: ${event.type}] Compiled prompt generated.`);
    } else if (event.type === ExecutionEventType.CACHE_HIT) {
      console.log(`[EVENT: ${event.type}] Cache hit for key: ${event.data.cacheKey}`);
    } else if (event.type === ExecutionEventType.CACHE_MISS) {
      console.log(`[EVENT: ${event.type}] Cache miss for key: ${event.data.cacheKey}`);
    } else {
      console.log(`[EVENT: ${event.type}]`, JSON.stringify(event.data, null, 2));
    }
  }
}

// --- Define the AiFunction ---

// 3. Define Input and Output Schemas using Zod
const SentimentInputSchema = z.object({
  textToAnalyze: z.string().min(5, "Text must be at least 5 characters long"),
});
type SentimentInput = z.infer<typeof SentimentInputSchema>;

const SentimentOutputSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]).describe("The detected sentiment"),
  confidence: z.number().min(0).max(1).describe("Confidence score (0.0 to 1.0)"),
});
type SentimentOutput = z.infer<typeof SentimentOutputSchema>;

// 4. Define the AiFunction structure
const analyzeSentimentDefinition: AiFunctionDefinition<
  SentimentInput,
  SentimentOutput
> = {
  functionId: "sentimentAnalyzerV1", // Useful for logging/tracing
  inputSchema: SentimentInputSchema,
  outputSchema: SentimentOutputSchema,
  // userQueryTemplate is mandatory: uses Handlebars syntax {{variableName}}
  userQueryTemplate: "Perform sentiment analysis on the following text: {{{textToAnalyze}}}",
  // promptTemplate is optional: if omitted, a default template enforcing JSON output based on outputSchema is used.
  // retryOptions: { maxAttempts: 2 }, // Optional: default is 3 attempts
  cacheOptions: { enabled: true, ttl: 60000 }, // Optional: enable caching for 1 minute
};

// 5. Prepare Execution Context
const executionContext: ExecutionContext = {
  llmProvider: new MockLLMProvider(),
  // Optional: add cache and event handler
  cacheProvider: new InMemoryCacheProvider({ maxSize: 100 }), // Keep up to 100 items
  eventHandler: new ConsoleEventHandler(),
};

// 6. Create the Executable Function
const analyzeSentiment = createAiFunction(analyzeSentimentDefinition, executionContext);

// 7. Execute the Function
async function runAnalysis() {
  const input1: SentimentInput = {
    textToAnalyze: "LLume is a great framework, it makes working with LLMs so much easier!",
  };
  const input2: SentimentInput = {
    textToAnalyze: "This documentation could be clearer in some sections.",
  };

  try {
    console.log("\n--- Running Analysis 1 ---");
    const result1 = await analyzeSentiment(input1);
    console.log("Analysis 1 Result:", result1); // Expected: { sentiment: 'positive', confidence: ~0.95 }

    console.log("\n--- Running Analysis 1 (Again - Should hit cache) ---");
    const result1_cached = await analyzeSentiment(input1);
    console.log("Analysis 1 (Cached) Result:", result1_cached); // Should be identical to result1

    console.log("\n--- Running Analysis 2 ---");
    const result2 = await analyzeSentiment(input2);
    console.log("Analysis 2 Result:", result2); // Expected: { sentiment: 'neutral' or 'negative', confidence: ... }

    // Example of invalid input
    console.log("\n--- Running Analysis 3 (Invalid Input) ---");
    // Note: the original README is truncated here; the lines below reconstruct
    // the likely intent: input failing the min(5) check triggers a validation error.
    const invalidInput = { textToAnalyze: "Hi" };
    await analyzeSentiment(invalidInput as SentimentInput);
  } catch (error) {
    console.error("Execution failed:", error);
  }
}

runAnalysis();
```
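To see why the structured-output guarantee works, it helps to picture roughly what the default prompt looks like once the user query and the JSON-schema instructions are combined. The `buildPrompt` function below is a simplified stand-in; the exact wording of LLume's real default template will differ, and LLume derives the JSON schema from your Zod output schema automatically via `zod-to-json-schema`.

```typescript
// Simplified sketch (hypothetical, NOT LLume's actual template) of how a
// default prompt might combine the compiled user query with JSON-schema
// output instructions.
function buildPrompt(userQuery: string, outputJsonSchema: object): string {
  return [
    "You are a function that returns only JSON.",
    "Respond with a single JSON object matching this JSON Schema:",
    JSON.stringify(outputJsonSchema, null, 2),
    "",
    `User request: ${userQuery}`,
  ].join("\n");
}

// Usage with a hand-written schema mirroring SentimentOutputSchema above.
const sentimentJsonSchema = {
  type: "object",
  properties: {
    sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
    confidence: { type: "number", minimum: 0, maximum: 1 },
  },
  required: ["sentiment", "confidence"],
};

console.log(buildPrompt("Perform sentiment analysis on: I love it!", sentimentJsonSchema));
```

Embedding the schema in the prompt is what makes the LLM's raw output parseable and validatable against the same Zod schema on the way back out.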