# PromptMask

A local-first privacy layer for Large Language Models.
Cloud AI is smart but sacrifices privacy.
Local AI keeps your secrets but is dumb.
What if we could combine the advantages of both?
PromptMask ensures your private data never leaves your machine. It redacts and un-redacts sensitive data locally, so that only anonymized data is sent to third-party AI services.(*)

(*): "Local" in this project is used only as a terminology distinction from cloud AI, whose privacy protection is questionable. PromptMask is fully compatible with any remote LLM that you trust to process sensitive data.
## Table of Contents
- Table of Contents
- How It Works
- Quickstart
- Configuration
- Advanced Usage: PromptMask
- Web Server: WebUI & API
- Development & Contribution
- License
## How It Works
The core principle is to use a trusted (local) model as a "privacy filter" for a powerful, remote model. The process is fully automated.

I wrote a blog post with more details on the why and how: How Not to Give AI Companies Your Secrets
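The redact → query → restore loop can be sketched in plain Python. This is a conceptual illustration only, not the PromptMask API: in PromptMask the mask map is produced automatically by the trusted local model, whereas here it is hard-coded.

```python
# Conceptual sketch of the privacy-filter pipeline; not the PromptMask API.
# In PromptMask, the mask map below is generated by the trusted local model.

def mask(text: str, mask_map: dict[str, str]) -> str:
    """Replace each secret with its placeholder before the text leaves the machine."""
    for secret, placeholder in mask_map.items():
        text = text.replace(secret, placeholder)
    return text

def unmask(text: str, mask_map: dict[str, str]) -> str:
    """Restore the secrets in the cloud model's reply, locally."""
    for secret, placeholder in mask_map.items():
        text = text.replace(placeholder, secret)
    return text

mask_map = {"Jensen Huang": "${PERSON_NAME}"}
outgoing = mask("Write a memo about Jensen Huang.", mask_map)
# Only the anonymized text is sent to the remote model:
assert outgoing == "Write a memo about ${PERSON_NAME}."
cloud_reply = f"Memo: {outgoing}"  # stand-in for the remote LLM's answer
print(unmask(cloud_reply, mask_map))  # -> Memo: Write a memo about Jensen Huang.
```

The key property: the remote service only ever sees placeholders, and the placeholder-to-secret mapping never leaves your machine.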
## Quickstart

### Choosing an Integration Method

Use this table to find the best way to integrate PromptMask into your workflow:
| | Existing OpenAI-compatible Tools | Direct Use / Custom Integration |
| :--- | :--- | :--- |
| Python Developers | from promptmask import OpenAIMasked as OpenAI <br/>A drop-in replacement for the openai.OpenAI client | from promptmask import PromptMask<br/>For granular control over mask/unmask operations |
| General Users<br/>(No Python) | http://localhost:8000/gateway/v1/chat/completions <br/>Point your existing apps to promptmask-web's local endpoint | Web UI http://localhost:8000/ & Web API http://localhost:8000/docs <br/>For interactive testing or non-standard tools |
### Prerequisites
- A local LLM running with an OpenAI-compatible API endpoint.

By default, PromptMask will attempt to connect to `http://localhost:11434/v1` for masking sensitive information.

Ollama is a popular and straightforward option for running a local OpenAI-compatible LLM API. Other options include llama.cpp and vLLM.

Don't worry if you don't have a local LLM. PromptMask doesn't restrict you to a local address: you can always set a remote (trusted) endpoint as PromptMask's LLM API, such as a self-hosted GPU cloud or a trusted AI service provider.
### Choosing a Model with Benchmarks

Choosing a capable local model makes data masking considerably more reliable.

See the benchmark to select a competent model that fits within your hardware limitations. Alternatively, run your own benchmarks using python eval/s[1,2,3]_*.py.

The local LLM model ID can be specified in your config file (see the Configuration section for details):
```toml
[llm_api]
model = "qwen2.5:7b"
```
### For General Users: Local OpenAI-compatible API Gateway

Point any existing tool or app at the local gateway. It's a seamless way to add the PromptMask layer without writing any Python.
1. Install promptmask-web via pip:

   ```sh
   pip install "promptmask[web]"
   ```

2. Run the web server:

   ```sh
   promptmask-web
   ```

   The console will display where the web server is launched. For example:

   ```
   INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
   ```

3. Use the gateway endpoint: simply replace the official OpenAI API base URL with the local gateway's URL in your tool of choice.

   ```sh
   curl http://localhost:8000/gateway/v1/chat/completions \
     -H "Authorization: Bearer $YOUR_OPENAI_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "model": "gpt-99-ultra",
       "messages": [
         {
           "role": "user",
           "content": "My name is Ho Shih-Chieh and my appointment ID is Y1a2e87. I booked a dental appointment on Oct 26, but I have to cancel for a meeting. Please help me write a cancellation request email in French."
         }
       ]
     }'
   ```

   Your sensitive data (`Ho Shih-Chieh`, `Y1a2e87`) will be redacted before being sent to the AI company, and then restored in the final response.

   Besides OpenAI, if you are using other cloud AI providers, such as Google Gemini, you need to add `web.upstream_oai_api_base` to your config file (see the Configuration section for details):

   ```toml
   [web]
   upstream_oai_api_base = "https://generativelanguage.googleapis.com/v1beta/openai"
   ```
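The same base-URL swap works from any HTTP client. The sketch below builds the curl request with Python's standard library only; no network traffic happens until the commented `urlopen` call, which assumes promptmask-web is running on the default port.

```python
import json
import urllib.request

# Same request as the curl example, but targeting the local gateway
# instead of the official OpenAI API base URL.
GATEWAY_BASE = "http://localhost:8000/gateway/v1"

req = urllib.request.Request(
    f"{GATEWAY_BASE}/chat/completions",
    data=json.dumps({
        "model": "gpt-99-ultra",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_OPENAI_API_KEY",
        "Content-Type": "application/json",
    },
)
print(req.full_url)  # -> http://localhost:8000/gateway/v1/chat/completions
# with urllib.request.urlopen(req) as resp:  # requires promptmask-web running
#     print(json.load(resp))
```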
### For Python Developers: OpenAIMasked

The `OpenAIMasked` class is a drop-in replacement for the official `openai.OpenAI` client.
1. Install the base package:

   ```sh
   pip install promptmask
   ```

2. Mask the OpenAI SDK in your code: the adapter automatically handles masking/unmasking for standard and streaming requests. Simply replace `openai.OpenAI` as follows:

   ```python
   # from openai import OpenAI
   from promptmask import OpenAIMasked as OpenAI

   client = OpenAI()
   ```

   Full example:

   ```python
   from promptmask import OpenAIMasked as OpenAI  # openai.OpenAI, but with automatic privacy redaction

   client = OpenAI(base_url="https://api.cloud-ai-service.example.com/v1")  # reads OPENAI_API_KEY from env

   # non-stream
   response = client.chat.completions.create(
       model="gpt-100-pro",
       messages=[
           {"role": "user", "content": "My user ID is johndoe and my phone number is 4567890. Please help me write an application letter."}
       ]
   )
   print(response.choices[0].message.content)
   # access response.choices[0].message.original_content for the original masked one

   # stream
   stream = client.chat.completions.create(
       model="gpt-101-turbo-mini",
       stream=True,
       messages=[
           {"role": "user", "content": "My patient, Jensen Huang (Patient ID: P123456789), is taking metformin and is experiencing nausea. What are the common side effects and management strategies?"}
       ]
   )
   # response chunks are unmasked on-the-fly
   for chunk in stream:
       print(chunk.choices[0].delta.content or "", end="")
   ```
See more examples at examples/.
## Configuration

To customize, create a `promptmask.config.user.toml` file in your working directory. For example:
```toml
# promptmask.config.user.toml

[llm_api]
# Specify a particular local model to use for masking; model names depend on
# the inference engine; leave empty to auto-select the first one listed at /v1/models
model = "qwen2.5:7b"

# Define what data is considered sensitive.
[sensitive]
include = "personal ID and passwords" # Overrides the default

# Change the default mask wrapper.
[mask_wrapper]
left = "__"
right = "__"
```
Check `promptmask.config.default.toml` for a full config file example.
Environment variables can override specific settings:

- `LOCALAI_API_BASE`: the base URL for your local LLM's API (e.g., `http://192.168.1.234:11434/v1`).
- `LOCALAI_API_KEY`: the API key for your local LLM, if required.
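For example, to point PromptMask at an OpenAI-compatible server on another machine for the current shell session (the address and key below are placeholders; adjust them to your setup):

```shell
export LOCALAI_API_BASE="http://192.168.1.234:11434/v1"
export LOCALAI_API_KEY="sk-local-example"   # only if your local server requires a key
echo "$LOCALAI_API_BASE"
```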
PromptMask is configured through a hierarchy of sources, from highest to lowest priority:

1. `LOCALAI_API_BASE` and `LOCALAI_API_KEY` environment variables.
2. A `dict` passed directly to the `PromptMask` constructor (`config` parameter).
3. A path to a TOML file (`config_file` parameter).
4. A `promptmask.config.user.toml` file in the current working directory.
5. The packaged `promptmask.config.default.toml`.
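The precedence can be pictured as a layered dict merge in which higher-priority sources override lower ones. This is an illustrative sketch only; the `api_base`/`model` key names follow the README's examples, not necessarily PromptMask's internals.

```python
# Illustrative merge of the config hierarchy, lowest to highest priority.
default_cfg = {"llm_api": {"api_base": "http://localhost:11434/v1", "model": ""}}
user_cfg = {"llm_api": {"model": "qwen2.5:7b"}}                        # promptmask.config.user.toml
env_cfg = {"llm_api": {"api_base": "http://192.168.1.234:11434/v1"}}  # LOCALAI_API_BASE

merged: dict = {"llm_api": {}}
for source in (default_cfg, user_cfg, env_cfg):  # later (higher-priority) sources win
    merged["llm_api"].update(source["llm_api"])

print(merged["llm_api"])
# -> {'api_base': 'http://192.168.1.234:11434/v1', 'model': 'qwen2.5:7b'}
```

Note that each source only overrides the keys it actually sets, so the user config's `model` and the environment's `api_base` coexist in the final configuration.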
## Advanced Usage: PromptMask

For granular control, import `PromptMask` directly to perform masking and unmasking as separate steps.
```python
import asyncio  # PromptMask also runs synchronously

from promptmask import PromptMask

async def main():
    masker = PromptMask()
    original_text = "Please process the visa application for Jensen Huang, passport number A12345678."

    # 1. Mask your secrets
    masked_text, mask_map = await masker.async_mask_str(original_text)
    print(f"Masked Text: {masked_text}")
    # Expected output (may vary): Masked Text: Please process the visa application for ${PERSON_NAME}, passport number ${PASSPORT_NUMBER}.
    print(f"Mask Map: {mask_map}")
    # Expected output (may vary): Mask Map: {"Jensen Huang": "${PERSON_NAME}", ...}

asyncio.run(main())
```