
Ccproxy

Build mods for Claude Code: Hook any request, modify any response, /model "with-your-custom-model", intelligent model routing using your logic or ours


ccproxy - Claude Code Proxy Version

Join starbased HQ for questions, sharing setups, and contributing to development.

ccproxy unlocks the full potential of Claude Code by letting you use Claude alongside other LLM providers like OpenAI, Gemini, and Perplexity.

It works by intercepting Claude Code's requests through a LiteLLM Proxy Server, allowing you to route different types of requests to the most suitable model - keep your unlimited Claude for standard coding, send large contexts to Gemini's 2M token window, route web searches to Perplexity, all while Claude Code thinks it's talking to the standard API.

⚠️⚠️ main Branch Status: As of 2026-02-05, the current release may not be stable for ALL Claude Code versions. Progress towards the next release candidate is ongoing; please check the Discord before filing an issue.

⚠️ Note: While core functionality is complete, real-world testing and community input are welcomed. Please open an issue to share your experience, report bugs, or suggest improvements, or even better, submit a PR!

Installation

Important: ccproxy must be installed with LiteLLM in the same environment so that LiteLLM can import the ccproxy handler.

Recommended: Install as uv tool

# Install from PyPI
uv tool install claude-ccproxy --with 'litellm[proxy]'

# Or install from GitHub (latest)
uv tool install git+https://github.com/starbased-co/ccproxy.git --with 'litellm[proxy]'

This installs:

  • ccproxy command (for managing the proxy)
  • litellm bundled in the same environment (so it can import ccproxy's handler)

Alternative: Install with pip

# Install both packages in the same virtual environment
pip install git+https://github.com/starbased-co/ccproxy.git
pip install 'litellm[proxy]'

Note: With pip, both packages must be in the same virtual environment.

Verify Installation

ccproxy --help
# Should show ccproxy commands

which litellm
# Should point to litellm in ccproxy's environment

Usage

Run the automated setup:

# This will create all necessary configuration files in ~/.ccproxy
ccproxy install

tree ~/.ccproxy
# ~/.ccproxy
# ├── ccproxy.yaml
# └── config.yaml

# ccproxy.py is auto-generated when you start the proxy

# Start the proxy server
ccproxy start --detach

# Start Claude Code
ccproxy run claude
# Or add to your .zshrc/.bashrc
export ANTHROPIC_BASE_URL="http://localhost:4000"
# Or use an alias
alias claude-proxy='ANTHROPIC_BASE_URL="http://localhost:4000" claude'

Congrats, you have installed ccproxy! The installed configuration files are intended as a simple demonstration, so we recommend continuing to the next section to configure ccproxy for your own setup.

Configuration

ccproxy.yaml

This file controls how ccproxy hooks into your Claude Code requests and how to route them to different LLM models based on rules. Here you specify rules, their evaluation order, and criteria like token count, model type, or tool usage.

ccproxy:
  debug: true

  # OAuth token sources - map provider names to shell commands
  # Tokens are loaded at startup for SDK/API access outside Claude Code
  oat_sources:
    anthropic: "jq -r '.claudeAiOauth.accessToken' ~/.claude/.credentials.json"
    # Extended format with custom User-Agent:
    # gemini:
    #   command: "jq -r '.token' ~/.gemini/creds.json"
    #   user_agent: "MyApp/1.0"

  hooks:
    - ccproxy.hooks.rule_evaluator # evaluates rules against request (needed for routing)
    - ccproxy.hooks.model_router # routes to appropriate model
    - ccproxy.hooks.forward_oauth # forwards OAuth token to provider
    - ccproxy.hooks.extract_session_id # extracts session ID for LangFuse tracking
    # - ccproxy.hooks.capture_headers  # logs HTTP headers (with redaction)
    # - ccproxy.hooks.forward_apikey   # forwards x-api-key header
  rules:
    # example rules
    - name: token_count
      rule: ccproxy.rules.TokenCountRule
      params:
        - threshold: 60000
    - name: web_search
      rule: ccproxy.rules.MatchToolRule
      params:
        - tool_name: WebSearch
    # basic rules
    - name: background
      rule: ccproxy.rules.MatchModelRule
      params:
        - model_name: claude-3-5-haiku-20241022
    - name: think
      rule: ccproxy.rules.ThinkingRule

litellm:
  host: 127.0.0.1
  port: 4000
  num_workers: 4
  debug: true
  detailed_debug: true
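To illustrate the oat_sources mechanism described above, here is a minimal sketch of how a configured shell command could be resolved into a token at startup. The helper name `load_oauth_token` is hypothetical (not ccproxy's actual API); the example uses an `echo` stand-in rather than the real `jq` command.

```python
import subprocess

def load_oauth_token(command: str) -> str:
    """Run a configured shell command and return its stdout as the token.

    Hypothetical helper: ccproxy's real implementation may differ.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# Stand-in command instead of the jq call from ccproxy.yaml:
token = load_oauth_token("echo sk-test-token")
print(token)  # sk-test-token
```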

When ccproxy receives a request from Claude Code, the rule_evaluator hook labels the request with the first matching rule:

  1. MatchModelRule: A request with model: claude-3-5-haiku-20241022 is labeled: background
  2. ThinkingRule: A request with thinking: {enabled: true} is labeled: think

If a request doesn't match any rule, it receives the default label.
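The first-match labeling described above can be sketched in a few lines. The predicates and the `label_request` helper below are hypothetical stand-ins for ccproxy's rule classes, not its actual API:

```python
# First-match rule evaluation: the first rule whose predicate passes
# supplies the label; otherwise the request gets "default".

def match_model(request: dict, model_name: str) -> bool:
    return request.get("model") == model_name

def match_thinking(request: dict) -> bool:
    return request.get("thinking", {}).get("enabled", False)

RULES = [
    ("background", lambda r: match_model(r, "claude-3-5-haiku-20241022")),
    ("think", match_thinking),
]

def label_request(request: dict) -> str:
    for label, predicate in RULES:
        if predicate(request):
            return label  # first matching rule wins
    return "default"

print(label_request({"model": "claude-3-5-haiku-20241022"}))  # background
print(label_request({"model": "claude-sonnet-4-5",
                     "thinking": {"enabled": True}}))          # think
print(label_request({"model": "claude-sonnet-4-5"}))           # default
```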

config.yaml

LiteLLM's proxy configuration file is where your model deployments are defined. The model_router hook takes advantage of LiteLLM's model alias feature to dynamically rewrite the model field in requests based on rule criteria before LiteLLM selects a deployment. When a request is labeled (e.g., think), the hook changes the model from whatever Claude Code requested to the corresponding alias, allowing seamless redirection to different models.
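The rewrite step can be pictured as a simple substitution: once a request is labeled, its model field is replaced with the label so that LiteLLM's alias entry selects the real deployment. This is a hypothetical sketch of the idea, not ccproxy's actual hook code:

```python
# model_name aliases defined in config.yaml's model_list
ALIASES = {"default", "think", "background"}

def route_model(request: dict, label: str) -> dict:
    """Rewrite the model field to the routing label so LiteLLM's
    alias entries resolve it to a concrete deployment."""
    routed = dict(request)
    routed["model"] = label if label in ALIASES else "default"
    return routed

request = {"model": "claude-sonnet-4-5-20250929", "messages": []}
print(route_model(request, "think")["model"])  # think
```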

The diagram shows how routing labels (⚡ default, 🧠 think, 🍃 background) map to their corresponding model deployments:

graph LR
    subgraph ccproxy_yaml["<code>ccproxy.yaml</code>"]
        R1["<div style='text-align:left'><code>rules:</code><br/><code>- name: default</code><br/><code>- name: think</code><br/><code>- name: background</code></div>"]
    end

    subgraph config_yaml["<code>config.yaml</code>"]
        subgraph aliases[" "]
            A1["<div style='text-align:left'><code>model_name: default</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: claude-sonnet-4-5-20250929</code></div>"]
            A2["<div style='text-align:left'><code>model_name: think</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: claude-opus-4-5-20251101</code></div>"]
            A3["<div style='text-align:left'><code>model_name: background</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: claude-3-5-haiku-20241022</code></div>"]
        end

        subgraph models[" "]
            M1["<div style='text-align:left'><code>model_name: claude-sonnet-4-5-20250929</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: anthropic/claude-sonnet-4-5-20250929</code></div>"]
            M2["<div style='text-align:left'><code>model_name: claude-opus-4-5-20251101</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: anthropic/claude-opus-4-5-20251101</code></div>"]
            M3["<div style='text-align:left'><code>model_name: claude-3-5-haiku-20241022</code><br/><code>litellm_params:</code><br/><code>&nbsp;&nbsp;model: anthropic/claude-3-5-haiku-20241022</code></div>"]
        end
    end

    R1 ==>|"⚡ <code>default</code>"| A1
    R1 ==>|"🧠 <code>think</code>"| A2
    R1 ==>|"🍃 <code>background</code>"| A3

    A1 -->|"<code>alias</code>"| M1
    A2 -->|"<code>alias</code>"| M2
    A3 -->|"<code>alias</code>"| M3

    style R1 fill:#e6f3ff,stroke:#4a90e2,stroke-width:2px,color:#000

    style A1 fill:#fffbf0,stroke:#ffa500,stroke-width:2px,color:#000
    style A2 fill:#fff0f5,stroke:#ff1493,stroke-width:2px,color:#000
    style A3 fill:#f0fff0,stroke:#32cd32,stroke-width:2px,color:#000

    style M1 fill:#f8f9fa,stroke:#6c757d,stroke-width:1px,color:#000
    style M2 fill:#f8f9fa,stroke:#6c757d,stroke-width:1px,color:#000
    style M3 fill:#f8f9fa,stroke:#6c757d,stroke-width:1px,color:#000

    style aliases fill:#f0f8ff,stroke:#333,stroke-width:1px
    style models fill:#f5f5f5,stroke:#333,stroke-width:1px
    style ccproxy_yaml fill:#e8f4fd,stroke:#2196F3,stroke-width:2px
    style config_yaml fill:#ffffff,stroke:#333,stroke-width:2px

And the corresponding config.yaml:

# config.yaml
model_list:
  # aliases here are used to select a deployment below
  - model_name: default
    litellm_params:
      model: claude-sonnet-4-5-20250929

  - model_name: think
    litellm_params:
      model: claude-opus-4-5-20251101

  - model_name: background
    litellm_params:
      model: claude-3-5-haiku-20241022

  # deployments
  - model_name: claude-sonnet-4-5-20250929
    litellm_params:
      model: anthropic/claude-sonnet-4-5-20250929
      api_base: https://api.anthropic.com

  - model_name: claude-opus-4-5-20251101
    litellm_params:
      model: anthropic/claude-opus-4-5-20251101
      api_base: https://api.anthropic.com

  - model_name: claude-3-5-haiku-20241022
    litellm_params:
      model: anthropic/claude-3-5-haiku-20241022
      api_base: https://api.anthropic.com

litellm_settings:
  callbacks:
    - ccproxy.handler
general_settings:
  forward_client_headers_to_llm_api: true

See docs/configuration.md for more information on how to customize your Claude Code experience using ccproxy.

<!-- ## Extended Thinking --> <!-- Normally, when you send a message, Claude Code does a simple keyword scan for words/phrases like "think deeply" to determine whether or not to enable thinking, as well the size of the thinking token budget. [Simply including the word "ultrathink](https://claudelog.com/mechanics/ultrathink-plus-plus/) sets the thinking token budget to the maximum of `31999`. -->

Routing Rules

ccproxy provides several built-in rules as an homage to claude-code-router:

  • MatchModelRule: Routes based on the requested model name (e.g. claude-3-5-haiku-20241022 for background tasks)
  • TokenCountRule: Routes when the request's token count exceeds a configured threshold (e.g. 60000)
  • MatchToolRule: Routes when a specific tool (e.g. WebSearch) is present in the request
  • ThinkingRule: Routes when extended thinking is enabled on the request
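As an illustration of tool matching, the sketch below checks whether a named tool appears in an Anthropic-style request's tools array. The `has_tool` helper is hypothetical, not ccproxy's actual rule implementation:

```python
# MatchToolRule-style check: does the request declare a tool
# with the given name in its tools array?

def has_tool(request: dict, tool_name: str) -> bool:
    return any(t.get("name") == tool_name for t in request.get("tools", []))

req = {"model": "claude-sonnet-4-5", "tools": [{"name": "WebSearch"}]}
print(has_tool(req, "WebSearch"))                       # True
print(has_tool({"model": "claude-sonnet-4-5"}, "WebSearch"))  # False
```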