# AnyLanguageModel

An API-compatible, drop-in replacement for Apple's Foundation Models framework with support for custom language model providers. All you need to do is change your import statement:
```diff
- import FoundationModels
+ import AnyLanguageModel
```
```swift
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Retrieve the latest weather information for a city"

    @Generable
    struct Arguments {
        @Guide(description: "The city to fetch the weather for")
        var city: String
    }

    func call(arguments: Arguments) async throws -> String {
        "The weather in \(arguments.city) is sunny and 72°F / 23°C"
    }
}

let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model, tools: [WeatherTool()])

let response = try await session.respond {
    Prompt("How's the weather in Cupertino?")
}

print(response.content)
```
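Because the session API is provider-agnostic, switching backends should only require constructing a different model. The sketch below assumes an `AnthropicLanguageModel` initializer with `apiKey` and `model` parameters, inferred from the supported-provider list; check the package's documentation for the exact type and parameter names:

```swift
import Foundation
import AnyLanguageModel

// Sketch: use a remote provider instead of the on-device system model.
// `AnthropicLanguageModel` and its parameter names are assumptions based on
// the supported-provider list, not a confirmed API surface.
let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
    model: "claude-sonnet-4-5"
)

let session = LanguageModelSession(model: model, tools: [WeatherTool()])
let response = try await session.respond {
    Prompt("How's the weather in Cupertino?")
}
```

The rest of the code — tools, prompts, and response handling — stays the same.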
To observe or control tool execution, assign a delegate to the session:
```swift
actor ToolExecutionObserver: ToolExecutionDelegate {
    func didGenerateToolCalls(_ toolCalls: [Transcript.ToolCall], in session: LanguageModelSession) async {
        print("Generated tool calls: \(toolCalls)")
    }

    func toolCallDecision(
        for toolCall: Transcript.ToolCall,
        in session: LanguageModelSession
    ) async -> ToolExecutionDecision {
        // Return .stop to halt after tool calls, or .provideOutput(...) to bypass execution.
        // This is a good place to ask the user for confirmation (for example, in a modal dialog).
        .execute
    }

    func didExecuteToolCall(
        _ toolCall: Transcript.ToolCall,
        output: Transcript.ToolOutput,
        in session: LanguageModelSession
    ) async {
        print("Executed tool call: \(toolCall)")
    }
}

let session = LanguageModelSession(model: model, tools: [WeatherTool()])
session.toolExecutionDelegate = ToolExecutionObserver()
```
## Features

### Supported Providers
- [x] Apple Foundation Models
- [x] Core ML models
- [x] MLX models
- [x] llama.cpp (GGUF models)
- [x] Ollama HTTP API
- [x] Anthropic Messages API
- [x] Google Gemini API
- [x] OpenAI Chat Completions API
- [x] OpenAI Responses API
- [x] Open Responses (multi-provider Responses API–compatible endpoints)
## Requirements
- Swift 6.1+
- iOS 17.0+ / macOS 14.0+ / visionOS 1.0+ / Linux
> [!IMPORTANT]
> A bug in Xcode 26 may cause build errors when targeting macOS 15 / iOS 18 or earlier (e.g. `Conformance of 'String' to 'Generable' is only available in macOS 26.0 or newer`). As a workaround, build your project with Xcode 16. For more information, see issue #15.
## Installation

Add this package to your `Package.swift`:

```swift
dependencies: [
    .package(url: "https://github.com/huggingface/AnyLanguageModel", from: "0.8.0")
]
```
### Package Traits

AnyLanguageModel uses Swift 6.1 traits to conditionally include heavy dependencies, allowing you to opt in to only the language model backends you need. This results in smaller binary sizes and faster build times.

Available traits:

- `CoreML`: Enables Core ML model support (depends on [huggingface/swift-transformers](https://github.com/huggingface/swift-transformers))
- `MLX`: Enables MLX model support (depends on [ml-explore/mlx-swift-lm](https://github.com/ml-explore/mlx-swift-lm))
- `Llama`: Enables llama.cpp support (requires [mattt/llama.swift](https://github.com/mattt/llama.swift))
By default, no traits are enabled. To enable specific traits, specify them in your package's dependencies:
```swift
// In your Package.swift
dependencies: [
    .package(
        url: "https://github.com/huggingface/AnyLanguageModel.git",
        from: "0.8.0",
        traits: ["CoreML", "MLX"] // Enable CoreML and MLX support
    )
]
```
> [!IMPORTANT]
> Due to a Swift Package Manager bug, dependency resolution may fail when you enable traits, producing the error "exhausted attempts to resolve the dependencies graph." To work around this issue, add the underlying dependencies for each trait directly to your package:
>
> ```swift
> dependencies: [
>     .package(
>         url: "https://github.com/huggingface/AnyLanguageModel.git",
>         from: "0.8.0",
>         traits: ["CoreML", "MLX", "Llama"]
>     ),
>     .package(url: "https://github.com/huggingface/swift-transformers", from: "1.0.0"), // CoreML
>     .package(url: "https://github.com/ml-explore/mlx-swift-lm", from: "2.25.5"), // MLX
>     .package(url: "https://github.com/mattt/llama.swift", from: "2.0.0"), // Llama
> ]
> ```
>
> Include only the dependencies that correspond to the traits you enable. For more information, see issue #135.
### Using Traits in Xcode Projects
Xcode doesn't yet provide a built-in way to declare package dependencies with traits.
As a workaround,
you can create an internal Swift package that acts as a shim,
exporting the AnyLanguageModel module with the desired traits enabled.
Your Xcode project can then add this internal package as a local dependency.
For example, to use AnyLanguageModel with MLX support in an Xcode app project:
1. Create a local Swift package (in the root directory containing your Xcode project):

   ```shell
   mkdir -p Packages/MyAppKit
   cd Packages/MyAppKit
   swift package init
   ```
2. Specify the AnyLanguageModel package dependency
   (in Packages/MyAppKit/Package.swift):

   ```swift
   // swift-tools-version: 6.1
   import PackageDescription

   let package = Package(
       name: "MyAppKit",
       platforms: [
           .macOS(.v14),
           .iOS(.v17),
           .visionOS(.v1),
       ],
       products: [
           .library(
               name: "MyAppKit",
               targets: ["MyAppKit"]
           )
       ],
       dependencies: [
           .package(
               url: "https://github.com/huggingface/AnyLanguageModel",
               from: "0.8.0",
               traits: ["MLX"]
           )
       ],
       targets: [
           .target(
               name: "MyAppKit",
               dependencies: [
                   .product(name: "AnyLanguageModel", package: "AnyLanguageModel")
               ]
           )
       ]
   )
   ```
3. Export the AnyLanguageModel module
   (in Sources/MyAppKit/Export.swift):

   ```swift
   @_exported import AnyLanguageModel
   ```
4. Add the local package to your Xcode project:
Open your project settings,
navigate to the "Package Dependencies" tab,
and click "+" → "Add Local..." to select the Packages/MyAppKit directory.
Your app can now import AnyLanguageModel with MLX support enabled.
> [!TIP]
> For a working example of package traits in an Xcode app project, see chat-ui-swift.
## API Credentials and Security
When using third-party language model providers like OpenAI, Anthropic, or Google Gemini, you must handle API credentials securely.
> [!CAUTION]
> Never hardcode API credentials in your app. Malicious actors can reverse‑engineer your application binary or observe outgoing network requests (for example, on a compromised device or via a debugging proxy) to extract embedded credentials. There have been documented cases of attackers successfully exfiltrating API keys from mobile apps and racking up thousands of dollars in charges.
Here are two approaches for managing API credentials in production apps:
### Bring Your Own Key (BYOK)
Users provide their own API keys, which are stored securely in the system Keychain and sent directly to the provider in API requests.
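A minimal sketch of Keychain storage using Apple's Security framework is below; the service and account strings are illustrative placeholders, and a production app would likely want finer-grained error handling:

```swift
import Foundation
import Security

enum KeychainError: Error {
    case status(OSStatus)
}

// Save a user-provided API key to the Keychain.
// The service and account names are illustrative, not prescribed by AnyLanguageModel.
func saveAPIKey(_ key: String) throws {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.MyApp",
        kSecAttrAccount as String: "api-key",
    ]
    // Replace any existing item before adding the new one.
    SecItemDelete(query as CFDictionary)

    var attributes = query
    attributes[kSecValueData as String] = Data(key.utf8)
    // Restrict the item to this device, accessible only while unlocked.
    attributes[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly

    let status = SecItemAdd(attributes as CFDictionary, nil)
    guard status == errSecSuccess else { throw KeychainError.status(status) }
}

// Load the stored API key back from the Keychain.
func loadAPIKey() throws -> String {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.MyApp",
        kSecAttrAccount as String: "api-key",
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne,
    ]
    var result: AnyObject?
    let status = SecItemCopyMatching(query as CFDictionary, &result)
    guard status == errSecSuccess,
          let data = result as? Data,
          let apiKey = String(data: data, encoding: .utf8)
    else { throw KeychainError.status(status) }
    return apiKey
}
```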
Security considerations:
- Keychain data is encrypted using hardware-backed keys (protected by the Secure Enclave on supported devices)
- An attacker would need access to a running process to intercept credentials
- TLS encryption protects credentials in transit on the network
- Users can only compromise their own keys, not other users' keys
Trade-offs:
- Apple App Review has often rejected apps using this model
- Reviewers may be unable to test functionality — even with provided credentials
- Apple may require in-app purchase integration for usage credits
- Some users may find it inconvenient to obtain and enter API keys
### Proxy Server
Instead of connecting directly to the provider, route requests through your own authenticated service endpoint. API credentials are stored securely on your server, never in the client app.
Authenticate users with OAuth 2.1 or similar, issuing short-lived, scoped bearer tokens for client requests. If an attacker extracts tokens from your app, they're limited in scope and expire automatically.
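On the client side, attaching the short-lived token to proxied requests is a standard `URLSession` pattern. In this sketch the endpoint URL, request shape, and token source are all illustrative assumptions about your own proxy, not part of AnyLanguageModel:

```swift
import Foundation

// Sketch: call your own authenticated proxy instead of the provider directly.
// The URL and JSON body are placeholders for whatever your proxy defines.
func sendPrompt(_ prompt: String, accessToken: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/chat")!)
    request.httpMethod = "POST"
    // The short-lived, scoped token issued to this user — never a provider API key.
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["prompt": prompt])

    let (data, response) = try await URLSession.shared.data(for: request)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return data
}
```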
Security considerations:
- API keys never leave your server infrastructure
- Client tokens can be scoped (e.g., rate-limited, feature-restricted)
- Client tokens can be revoked or expired independently
- Compromised tokens have limited blast radius
Trade-offs:
- Additional infrastructure complexity (server, authentication, monitoring)
- Operational costs (hosting, maintenance, support)
- Network latency from additional hop
Fortunately, there are platforms and services that simplify proxy implementation, handling authentication, rate limiting, and billing for you.
> [!TIP]
> For development and testing, it's fine to use API keys from environment variables. Just make sure production builds use one of the secure approaches described above.
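For example, a development build might read the key like this (the variable name is an illustrative assumption):

```swift
import Foundation

// Development only: read the key from the environment instead of hardcoding it.
// "OPENAI_API_KEY" is an example name — use whatever your provider expects.
guard let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] else {
    fatalError("Set OPENAI_API_KEY in your scheme's environment variables")
}
```

In Xcode, environment variables can be set per-scheme under Product → Scheme → Edit Scheme → Run → Arguments, so they never end up in the binary.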