PromptMask

A local-first privacy layer for Large Language Models.

Cloud AI is smart but sacrifices privacy.
Local AI keeps your secrets but is dumb.
What if we could combine the advantages of both?


PromptMask ensures your private data never leaves your machine. It redacts and un-redacts sensitive data locally, so that only anonymized data is sent to third-party AI services. (*)

(*): "Local" in this project is used only for terminology distinction, differing from Cloud AI whose privacy protection is questionable. PromptMask is fully compatible with any remote LLM that you trust to process sensitive data.


How It Works

The core principle is to use a trusted (local) model as a "privacy filter" for a powerful, remote model. The process is fully automated.

(Workflow diagram: a trusted local model masks sensitive data before the prompt is sent to the remote model, and the secrets are restored in the response.)

I wrote a blog post with more details on the why and how: How Not to Give AI Companies Your Secrets
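
In code, the core loop reduces to three steps. A minimal sketch of the idea (the remote call is a stand-in, and async_unmask_str is assumed here as the counterpart of the async_mask_str method shown under Advanced Usage):

import asyncio
from promptmask import PromptMask

async def ask_remote(masked_prompt: str) -> str:
    # Stand-in for any cloud LLM call; it only ever sees anonymized text.
    return f"Received: {masked_prompt}"

async def main():
    masker = PromptMask()  # detection runs on your trusted local model
    masked, mask_map = await masker.async_mask_str("My passport number is A12345678.")
    reply = await ask_remote(masked)                       # secrets already redacted
    print(await masker.async_unmask_str(reply, mask_map))  # secrets restored locally

asyncio.run(main())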

Quickstart

Choosing Integration Method

Use this table to find the best way to integrate PromptMask into your workflow:

| | Existing OpenAI-compatible Tools | Direct Use / Custom Integration |
| :--- | :--- | :--- |
| Python Developers | from promptmask import OpenAIMasked as OpenAI<br/>A drop-in replacement for the openai.OpenAI client | from promptmask import PromptMask<br/>For granular control over mask/unmask operations |
| General Users<br/>(No Python) | http://localhost:8000/gateway/v1/chat/completions<br/>Point your existing apps to promptmask-web's local endpoint | Web UI http://localhost:8000/ & Web API http://localhost:8000/docs<br/>For interactive testing or non-standard tools |

Prerequisites

  • A local LLM running with an OpenAI-compatible API endpoint.
    By default, PromptMask will attempt to connect to http://localhost:11434/v1 for masking sensitive information.

Ollama is a popular and straightforward option to run a local OpenAI-compatible LLM API. Other options include llama.cpp and vLLM.

Don't worry if you don't have a local LLM. PromptMask does not require the masking endpoint to be a local address: you can point it at any remote endpoint you trust, such as a self-hosted GPU cloud or a trusted AI service provider.
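
For example, the LOCALAI_* environment variables described under Configuration can redirect the masking endpoint; a sketch with placeholder values:

import os

# Set these before constructing PromptMask; the URL and key are placeholders.
os.environ["LOCALAI_API_BASE"] = "https://trusted-llm.example.com/v1"
os.environ["LOCALAI_API_KEY"] = "sk-your-key"  # only if the endpoint requires one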

Choosing a Model with Benchmarks

Choosing a sufficiently capable local model makes a big difference in masking reliability.

See the benchmark to select a competent model that fits within your hardware limitations. Alternatively, run your own benchmarks using python eval/s[1,2,3]_*.py.

The local LLM model ID can be specified in your config file (see the Configuration section):

[llm_api]
model = "qwen2.5:7b"
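
If you are unsure which model IDs your endpoint exposes, you can list them with the stock openai SDK (Ollama's default address shown; the dummy API key is an assumption for servers that don't check it):

from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
for model in local.models.list():
    print(model.id)  # e.g. "qwen2.5:7b"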

For General Users: local OpenAI-compatible API Gateway

Point any existing tool or app at the local gateway. It's the most seamless way to add a PromptMask layer without writing any Python.

  1. Install promptmask-web via pip:

    pip install "promptmask[web]"
    
  2. Run the web server:

    promptmask-web
    

    The console will show where the web server is listening, for example: INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

  3. Use the gateway endpoint: Simply replace the official OpenAI API base URL with the local gateway's URL in your tool of choice. (A Python client example follows this list.)

    curl http://localhost:8000/gateway/v1/chat/completions \
      -H "Authorization: Bearer $YOUR_OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-99-ultra",
        "messages": [
          {
            "role": "user",
            "content": "My name is Ho Shih-Chieh and my appointment ID is Y1a2e87. I booked a dental appointment on Oct 26, but I have to cancel for a meeting. Please help me write a cancellation request email in French."
          }
        ]
      }'
    

    Your sensitive data (Ho Shih-Chieh, Y1a2e87) will be redacted before being sent to the AI company, and then restored in the final response.

    Besides OpenAI, if you use another cloud AI provider, such as Google Gemini, add web.upstream_oai_api_base to your config file (see the Configuration section):

    [web]
    upstream_oai_api_base = "https://generativelanguage.googleapis.com/v1beta/openai"
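
The same gateway works from any OpenAI-compatible client library, not just curl. A minimal sketch with the stock openai SDK (the model name is illustrative, as in the curl example above):

from openai import OpenAI

# The official SDK, pointed at the PromptMask gateway instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/gateway/v1")  # reads OPENAI_API_KEY from env
response = client.chat.completions.create(
    model="gpt-99-ultra",
    messages=[{"role": "user", "content": "My appointment ID is Y1a2e87. Draft a cancellation email."}],
)
print(response.choices[0].message.content)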
    

For Python Developers: OpenAIMasked

The OpenAIMasked class is a drop-in replacement for the official openai.OpenAI SDK.

  1. Install the base package:

    pip install promptmask
    
  2. Mask the OpenAI SDK in your code: The adapter automatically handles masking/unmasking for standard and streaming requests.

    Simply replace openai.OpenAI as follows:

    # from openai import OpenAI
    from promptmask import OpenAIMasked as OpenAI
    client = OpenAI()
    

    Full example:

    from promptmask import OpenAIMasked as OpenAI
    
    # openai.OpenAI, but with automatic privacy redaction.
    client = OpenAI(base_url="https://api.cloud-ai-service.example.com/v1") # reads OPENAI_API_KEY from env
    
    # non-stream
    response = client.chat.completions.create(
        model="gpt-100-pro",
        messages=[
            {"role": "user", "content": "My user ID is johndoe and my phone number is 4567890. Please help me write an application letter."}
        ]
    )
    print(response.choices[0].message.content) # access response.choices[0].message.original_content for the original masked one
    
    # stream
    stream = client.chat.completions.create(
        model="gpt-101-turbo-mini",
        stream=True,
        messages=[
            {"role": "user", "content": "My patient, Jensen Huang (Patient ID: P123456789), is taking metformin and is experiencing nausea. What are the common side effects and management strategies?"}
        ]
    )
    
    # response chunks are unmasked on-the-fly
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")
    

See more examples at examples/.

Configuration

To customize, create a promptmask.config.user.toml file in your working directory. For example:

# promptmask.config.user.toml

[llm_api]
# Model used for masking; model names depend on the inference engine.
# Leave empty to auto-select the first model listed at /v1/models.
model = "qwen2.5:7b"

# Define what data is considered sensitive.
[sensitive]
include = "personal ID and passwords" # Overrides the default definition

# Change the default mask wrapper (e.g., __PERSON_NAME__ instead of ${PERSON_NAME}).
[mask_wrapper]
left = "__"
right = "__"

Check promptmask.config.default.toml for a full config file example.

Environment variables to override specific settings:

  • LOCALAI_API_BASE: The Base URL for your local LLM's API (e.g., http://192.168.1.234:11434/v1).
  • LOCALAI_API_KEY: The API key for your local LLM, if required.
<details> <summary>Configuration Priority Hierarchy</summary>

PromptMask is configured through a hierarchy of sources, from highest to lowest priority:

  1. LOCALAI_API_BASE and LOCALAI_API_KEY environment variables.
  2. A dict passed directly to the PromptMask constructor (config parameter).
  3. A path to a TOML file (config_file parameter).
  4. A promptmask.config.user.toml file in the current working directory.
  5. The packaged promptmask.config.default.toml.
</details>
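
For example, levels 2 and 3 of this hierarchy can be exercised directly from Python. A sketch, assuming the config dict mirrors the TOML section layout, with a placeholder file path:

from promptmask import PromptMask

# Inline dict (priority 2) overrides any TOML file on disk.
masker = PromptMask(config={"llm_api": {"model": "qwen2.5:7b"}})

# Or load a specific TOML file (priority 3).
masker = PromptMask(config_file="configs/promptmask.config.user.toml")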

Advanced Usage: PromptMask

For granular control, import PromptMask directly to perform masking and unmasking as separate steps.

import asyncio # PromptMask also runs synchronously
from promptmask import PromptMask

async def main():
    masker = PromptMask()

    original_text = "Please process the visa application for Jensen Huang, passport number A12345678."

    # 1. Mask your secrets
    masked_text, mask_map = await masker.async_mask_str(original_text)

    print(f"Masked Text: {masked_text}")
    # Expected output (may vary): Masked Text: Please process the visa application for ${PERSON_NAME}, passport number ${PASSPORT_NUMBER}.
    
    print(f"Mask Map: {mask_map}")
    # Expected output: Mask Map: {"Jensen Huang": "$

No findings