# ChatClientKit
ChatClientKit is a Swift Package that unifies remote LLM APIs, local MLX models, and Apple Intelligence into a single, streaming-first interface. It ships with an ergonomic request DSL, rich tool-calling support, and a flexible error-collection pipeline so you can embed conversational AI in macOS, iOS, and Catalyst apps without rewriting clients per provider.
## When Building Fails

ChatClientKit relies on mlx-swift-lm, which may not always provide up-to-date releases. If you encounter build issues, add mlx-swift-lm directly to your workspace and override this package's resolved version by using the main branch.
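In practice the override means declaring the dependency at the workspace (or root package) level so SwiftPM resolves the `main` branch instead of a pinned release. A minimal sketch of the manifest change; the mlx-swift-lm URL below is an assumption, so point it at wherever that package is actually vended in your setup:

```swift
// In your root Package.swift.
// NOTE: the mlx-swift-lm URL is a placeholder assumption; substitute your actual source.
dependencies: [
    .package(url: "https://github.com/ml-explore/mlx-swift-examples.git", branch: "main"),
    .package(url: "https://github.com/your-org/ChatClientKit.git", branch: "main")
]
```

Because the root package declares the dependency directly, SwiftPM's resolution prefers it over the version pinned inside ChatClientKit.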
## Highlights

- One `ChatService` protocol powering Remote (OpenAI-style), MLX, and Apple Intelligence clients with interchangeable APIs.
- Responses API ready via `RemoteResponsesChatClient` for OpenAI/OpenRouter `/v1/responses` with streaming chunks.
- Streaming built in via `AsyncSequence` and Server-Sent Events, including structured reasoning, image, and tool-call payloads.
- Swift-first ergonomics thanks to `ChatRequestBuilder` and `ChatMessageBuilder`, letting you compose prompts declaratively.
- Tooling aware: tool-call routing, request supplements, and custom headers/body fields for provider-specific knobs.
- Observability ready with the included `ChatServiceErrorCollector` and the shared `Logger` dependency.
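Because every client conforms to the same `ChatService` protocol, swapping providers is a value-level change at the call site. The sketch below models that pattern with a simplified stand-in protocol and toy clients; everything here except the idea of a shared protocol is illustrative, not the library's real API surface:

```swift
// Simplified stand-in for the real ChatService protocol, to show the swap pattern.
protocol ChatServiceLike {
    func complete(_ prompt: String) async throws -> String
}

// Toy clients standing in for RemoteCompletionsChatClient and MLXChatClient.
struct EchoRemoteClient: ChatServiceLike {
    func complete(_ prompt: String) async throws -> String { "remote: \(prompt)" }
}

struct EchoLocalClient: ChatServiceLike {
    func complete(_ prompt: String) async throws -> String { "local: \(prompt)" }
}

// Call sites depend only on the protocol, so providers are interchangeable.
func summarize(with service: any ChatServiceLike) async throws -> String {
    try await service.complete("Summarize the last test run.")
}
```

The same shape lets an app offer a "remote vs. on-device" toggle without branching its chat logic.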
## Requirements
- Swift 6.0 toolchain (Xcode 16 beta or Swift 6 nightly).
- macOS 14+ for MLX builds; iOS 17+/macCatalyst 17+ for runtime targets.
- Apple Intelligence integrations require the Foundation Models runtime (iOS 26/macOS 26 SDKs) and runtime availability checks.
## Installation (Swift Package Manager)

```swift
// In Package.swift
dependencies: [
    .package(url: "https://github.com/your-org/ChatClientKit.git", branch: "main")
],
targets: [
    .target(
        name: "YourApp",
        dependencies: [
            .product(name: "ChatClientKit", package: "ChatClientKit")
        ]
    )
]
```
If you vend the Logger dependency separately, keep the relative path declared in `Package.swift` or adjust it to point at your organization's logging package.
## Usage

### Configure a remote model

```swift
import ChatClientKit

let client = RemoteCompletionsChatClient(
    model: "gpt-4o-mini",
    baseURL: "https://api.openai.com/v1/chat/completions",
    apiKey: ProcessInfo.processInfo.environment["OPENAI_API_KEY"],
    additionalHeaders: ["X-Client": "ChatClientKit-Demo"]
)

let response = try await client.chatCompletion {
    ChatRequest.system("You are a precise release-notes assistant.")
    ChatRequest.user("Summarize the last test run.")
    ChatRequest.temperature(0.2)
}
print(response.choices.first?.message.content ?? "")
```
### Stream responses

```swift
let stream = try await client.streamingChatCompletion {
    ChatRequest.system("You stream thoughts and final answers.")
    ChatRequest.user("Walk me through the onboarding checklist.")
}

for try await event in stream {
    switch event {
    case let .chatCompletionChunk(chunk):
        let text = chunk.choices.first?.delta.content ?? ""
        print(text, terminator: "")
    case let .tool(call):
        // Dispatch tool call to your executor.
        triggerTool(call)
    default:
        break // Ignore any other event kinds.
    }
}
```
### Call the Responses API (OpenAI/OpenRouter)

```swift
let responsesClient = RemoteResponsesChatClient(
    model: "google/gemini-3-pro-preview",
    baseURL: "https://openrouter.ai/api",
    path: "/v1/responses",
    apiKey: ProcessInfo.processInfo.environment["OPENROUTER_API_KEY"]
)

// Collect the entire response
let chunks = try await responsesClient.chatChunks {
    ChatRequest.messages {
        .system(content: .text("Answer concisely."))
        .user(content: .text("What is the capital of France?"))
    }
    ChatRequest.temperature(0.3)
}
let text = ChatResponse(chunks: chunks).text

// Or stream chunks as they arrive
let responseStream = try await responsesClient.streamingChat {
    ChatRequest.messages {
        .user(content: .text("Compose a haiku about integration tests."))
    }
}
for try await chunk in responseStream {
    if case let .text(delta) = chunk { print(delta, terminator: "") }
}
```
### Generate images with OpenRouter

```swift
let imageClient = RemoteCompletionsChatClient(
    model: "google/gemini-2.5-flash-image",
    baseURL: "https://openrouter.ai/api",
    path: "/v1/chat/completions",
    apiKey: ProcessInfo.processInfo.environment["OPENROUTER_API_KEY"],
    additionalHeaders: [
        "HTTP-Referer": "https://github.com/FlowDown/ChatClientKit",
        "X-Title": "ChatClientKit Demo"
    ],
    additionalBodyField: [
        "output_modalities": ["image", "text"],
        "modalities": ["image", "text"]
    ]
)

let imageResponse = try await imageClient.chat {
    ChatRequest.system("You are a professional icon designer. You must return an image.")
    ChatRequest.user("Generate a simple black-and-white line-art cat icon.")
}
let imageData = imageResponse.images.first?.data // base64 image payload
```
### Run local MLX models

```swift
let localClient = MLXChatClient(
    url: URL(fileURLWithPath: "/Models/Qwen2.5-7B-Instruct")
)

let reply = try await localClient.chatCompletion {
    ChatRequest.system("You are a privacy-first assistant running fully on-device.")
    ChatRequest.user("Draft a release tweet for ChatClientKit.")
    ChatRequest.maxCompletionTokens(512)
}
```
### Tap into Apple Intelligence

```swift
if #available(iOS 26, macOS 26, macCatalyst 26, *) {
    let aiClient = AppleIntelligenceChatClient()
    let aiResponse = try await aiClient.chatCompletion {
        ChatRequest.user("Find the action items from the last note.")
    }
}
```
### Build chat requests declaratively

```swift
// `statusData` and `ticketSchema` are assumed to be defined elsewhere in your app.
let makeStandupRequest = {
    ChatRequest {
        ChatRequest.model("gpt-4o-realtime-preview")
        ChatRequest.messages {
            .system(content: .text("Keep answers concise."))
            .user(parts: [
                .text("Turn these bullet points into a standup update."),
                .fileData(.init(filename: "status.md", mimeType: "text/markdown", data: statusData))
            ])
        }
        ChatRequest.tools([
            .jsonSchema(
                name: "create_ticket",
                description: "Open a Jira ticket",
                schema: ticketSchema
            )
        ])
    }
}
```
### Collect and surface errors

```swift
do {
    _ = try await client.chatCompletion { /* ... */ }
} catch {
    let message = await client.errorCollector.getError() ?? error.localizedDescription
    presentAlert(message) // presentAlert is your app's own UI helper.
}
```
## Architecture at a Glance

- `ChatService` is the core abstraction; `RemoteCompletionsChatClient`, `MLXChatClient`, and `AppleIntelligenceChatClient` conform to it.
- `RemoteClient` uses Server-Sent Events via the `ServerEvent` helper target plus `RemoteChatStreamProcessor` to decode incremental JSON chunks.
- `MLXClient` wraps MLX, MLXLLM, and MLXVLM to coordinate weights, vision inputs, and request queues for on-device inference.
- `FoundationModels` bridges Apple Intelligence personas, prompt building, and tool proxies with `LanguageModelSession`.
- `RequestBuilder` and `Supplement` keep request construction expressive while enabling additional metadata injection per provider.
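At its core, the SSE path boils down to splitting the event stream into `data:` lines and decoding each payload until the terminal `[DONE]` marker. The function below is a minimal illustration of that framing only, not the library's actual implementation; the real `RemoteChatStreamProcessor` additionally handles partial frames, reasoning deltas, and tool-call payloads:

```swift
import Foundation

// Minimal SSE "data:" line framing, as an illustration of the decode loop.
// Returns the raw JSON payload strings, stopping at the [DONE] sentinel.
func sseDataPayloads(from raw: String) -> [String] {
    raw.split(separator: "\n").compactMap { line -> String? in
        // Ignore comments, event names, and blank separator lines.
        guard line.hasPrefix("data:") else { return nil }
        let payload = line.dropFirst("data:".count)
            .trimmingCharacters(in: .whitespaces)
        // "[DONE]" carries no content; it just terminates the stream.
        return payload == "[DONE]" ? nil : payload
    }
}
```

Each returned payload would then be fed to a JSON decoder to produce the incremental chunk events surfaced through `AsyncSequence`.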
## Development

- Build: `swift build`
- Test: `swift test`
- Update dependencies: `swift package update`
- When running MLX locally, make sure your model folder matches the expected MLX layout (`tokenizer.json`, `config.json`, weight shards, etc.).
## License
ChatClientKit is available under the MIT License.