AIProxySwift

Swift client for AI providers. Requests can go straight to the provider, or be proxied through our API-key-protection backend.

About

Use this library to adopt AI APIs in your app. Swift clients for the following providers are included:

  • OpenAI
  • Gemini
  • Anthropic
  • Stability AI
  • DeepL
  • Together AI
  • Replicate
  • ElevenLabs
  • Fal
  • Groq
  • Perplexity
  • Mistral
  • EachAI
  • OpenRouter
  • DeepSeek
  • Fireworks AI
  • Brave

Your initialization code determines whether requests go straight to the provider or are protected through the AIProxy backend.

We only recommend making requests straight to the provider during prototyping and for BYOK use-cases.

Requests that are protected through AIProxy have five levels of security applied to keep your API key secure and your AI bill predictable:

  • Certificate pinning
  • DeviceCheck verification
  • Split key encryption
  • Per user rate limits
  • Per IP rate limits

Installation

Installation using Xcode

  1. From within your Xcode project, select File > Add Package Dependencies

    <img src="https://github.com/lzell/AIProxySwift/assets/35940/d44698a0-34e6-434b-b501-390254a14439" alt="Add package dependencies" width="420">
  2. Punch github.com/lzell/aiproxyswift into the package URL bar, and select the 'main' branch as the dependency rule. Alternatively, you can choose specific releases if you'd like to have finer control of when your dependency gets updated.

    <img src="https://github.com/lzell/AIProxySwift/assets/35940/fd76b588-5e19-4d4d-9748-8db3fd64df8e" alt="Set package rule" width="720">

Installation using CocoaPods

Add to your Podfile:

pod "AIProxy"

Then, from shell:

pod install

How to configure the package for use with AIProxy

We recommend using the AIProxy option useStableID to rate limit usage across an App Store user's account on multiple devices. To enable this, please first add support for iCloud's key-value storage:

  1. Tap on your project in Xcode's project tree
  2. Select your target in the secondary sidebar
  3. Tap on Signing & Capabilities > Add Capability > iCloud
  4. Check the 'Key-Value storage' checkbox
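The 'Key-Value storage' capability backs stable IDs with iCloud's key-value store. If you want to sanity-check that the capability is wired up, you can read and write a value through Foundation's NSUbiquitousKeyValueStore directly. This is standard Apple API, not part of AIProxySwift, and the key name here is just a throwaway example:

```swift
import Foundation

// Write a test value to iCloud's key-value store and read it back.
// On a target without the iCloud capability enabled, the store silently
// falls back to local-only behavior, so treat this as a smoke test.
let store = NSUbiquitousKeyValueStore.default
store.set("hello", forKey: "kvs-smoke-test")
store.synchronize() // request a sync with iCloud (best effort)

if let value = store.string(forKey: "kvs-smoke-test") {
    print("Key-value storage is reachable: \(value)")
} else {
    print("Key-value storage returned nil; check the iCloud capability")
}
```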

During your app's launch, call AIProxy.configure. Using this method, you can specify:

  • the log level that you'd like to see in your Xcode console from the AIProxy lib
  • whether to print request/response bodies to Xcode's console, which is useful for debugging or contributing to the library
  • whether to resolve DNS queries using Cloudflare's DoT (recommended)
  • whether to use stable identifiers as client IDs (recommended)

In a SwiftUI app, call AIProxy.configure in your app's composition root:

import AIProxy

@main
struct MyApp: App {
    init() {
        AIProxy.configure(
            logLevel: .debug,
            printRequestBodies: false,
            printResponseBodies: false,
            resolveDNSOverTLS: true,
            useStableID: true
        )
    }
    // ...
}

In a UIKit app, call AIProxy.configure in applicationDidFinishLaunching:

import AIProxy

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        AIProxy.configure(
            logLevel: .debug,
            printRequestBodies: false,
            printResponseBodies: false,
            resolveDNSOverTLS: true,
            useStableID: true
        )
        // ...
        return true
    }
    // ...
}

How to configure the AIProxy backend for use with your project

See the AIProxy integration video. Note that this is not required if you are shipping an app where the customers provide their own API keys (known as BYOK for "bring your own key").

If you are shipping an app using a personal or company API key, we highly recommend setting up AIProxy as an alternative to building, monitoring, and maintaining your own backend.

How to update the package

  • If you set the dependency rule to main during installation, then you can ensure the package is up to date by right clicking on the package and selecting 'Update Package'

    <img src="https://github.com/lzell/AIProxySwift/assets/35940/aeee0ab2-362b-4995-b9ca-ff4e1dd04f47" alt="Update package version" width="720">
  • If you selected a version-based rule, inspect the rule in the 'Package Dependencies' section of your project settings:

    <img src="https://github.com/lzell/AIProxySwift/assets/35940/ca788c4c-ac38-4d9d-bb4f-928a9487f6eb" alt="Update package rule" width="720">

    Once the rule is set to include the release version that you'd like to bring in, Xcode should update the package automatically. If it does not, right click on the package in the project tree and select 'Update Package'.

How to contribute to the package

Your additions to AIProxySwift are welcome! I like to develop the library while working in an app that depends on it:

  1. Fork the repo
  2. Clone your fork
  3. Open your app in Xcode
  4. Remove AIProxySwift from your app (since this is likely referencing a remote lib)
  5. Go to File > Add Package Dependencies, and in the bottom left of that popup there is a button "Add local"
  6. Tap "Add local" and then select the folder where you cloned AIProxySwift on your disk.

If you do that, then you can modify the source to AIProxySwift right from within your Xcode project for your app. Once you're happy with your changes, open a PR here.

Example usage

OpenAI

Get a non-streaming chat completion from OpenAI:

    import AIProxy

    /* Uncomment for BYOK use cases */
    // let openAIService = AIProxy.openAIDirectService(
    //     unprotectedAPIKey: "your-openai-key"
    // )

    /* Uncomment for all other production use cases */
    // let openAIService = AIProxy.openAIService(
    //     partialKey: "partial-key-from-your-developer-dashboard",
    //     serviceURL: "service-url-from-your-developer-dashboard"
    // )

    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-5.2",
        messages: [
            .system(content: .text("You are a friendly assistant")),
            .user(content: .text("hello world"))
        ],
        reasoningEffort: .noReasoning
    )

    do {
        let response = try await openAIService.chatCompletionRequest(
            body: requestBody,
            secondsToWait: 120
        )
        print(response.choices.first?.message.content ?? "")
    } catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received \(statusCode) status code with response body: \(responseBody)")
    } catch {
        print("Could not create OpenAI chat completion: \(error)")
    }

How to make a buffered chat completion to OpenAI with extended timeout

This is useful for o1 and o3 models.

    import AIProxy

    /* Uncomment for BYOK use cases */
    // let openAIService = AIProxy.openAIDirectService(
    //     unprotectedAPIKey: "your-openai-key"
    // )

    /* Uncomment for all other production use cases */
    // let openAIService = AIProxy.openAIService(
    //     partialKey: "partial-key-from-your-developer-dashboard",
    //     serviceURL: "service-url-from-your-developer-dashboard"
    // )

    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-5.2",
        messages: [
          .developer(content: .text("You are a coding assistant")),
          .user(content: .text("Build a ruby service that writes latency stats to redis on each request"))
        ],
        reasoningEffort: .high
    )

    do {
        let response = try await openAIService.chatCompletionRequest(
            body: requestBody,
            secondsToWait: 300
        )
        print(response.choices.first?.message.content ?? "")
    } catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
    } catch let err as URLError where err.code == URLError.timedOut {
        print("Request to OpenAI for a reasoning request timed out")
    } catch let err as URLError where [.notConnectedToInternet, .networkConnectionLost].contains(err.code) {
        print("Could not complete OpenAI reasoning request. Please check your internet connection")
    } catch {
        print("Could not complete OpenAI reasoning request: \(error)")
    }

Get a streaming chat completion from OpenAI:

    import AIProxy

    /* Uncomment for BYOK use cases */
    // let openAIService = AIProxy.openAIDirectService(
    //     unprotectedAPIKey: "your-openai-key"
    // )

    /* Uncomment for all other production use cases */
    // let openAIService = AIProxy.openAIService(
    //     partialKey: "partial-key-from-your-developer-dashboard",
    //     serviceURL: "service-url-from-your-developer-dashboard"
    // )

    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-5.2",
        messages: [.user(content: .text("hello world"))],
        reasoningEffort: .noReasoning
    )

    do {
        let stream = try await openAIService.streamingChatCompletionRequest(
            body: requestBody,
            secondsToWait: 60
        )
        for try await chunk in stream {
            print(chunk.choices.first?.delta.content ?? "")
        }
    } catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received \(statusCode) status code with response body: \(responseBody)")
    } catch {
        print("Could not create OpenAI streaming chat completion: \(error)")
    }
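If you want the full reply rather than token-by-token output, you can accumulate the streamed deltas into a single string. A minimal sketch, assuming the same openAIService and requestBody setup as the streaming example above:

```swift
import AIProxy

// Accumulate streamed deltas into one string. Assumes `openAIService`
// and `requestBody` are configured as in the streaming example above.
do {
    let stream = try await openAIService.streamingChatCompletionRequest(
        body: requestBody,
        secondsToWait: 60
    )
    var fullReply = ""
    for try await chunk in stream {
        fullReply += chunk.choices.first?.delta.content ?? ""
    }
    print("Full reply: \(fullReply)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not stream OpenAI chat completion: \(error)")
}
```

Accumulating locally like this keeps the UI option open: you can render chunks as they arrive and still hand the complete message to the rest of your app when the stream finishes.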