# HipoCap

An open-source DevSecOps layer for your AI agent: governance, custom guardrails, and observability in one platform.
HipoCap is an AI security and observability platform that protects your LLM applications from prompt injection attacks while providing comprehensive observability.
- [x] 🛡️ AI Security - Multi-stage defense pipeline
  - [x] Prompt Guard - Fast input analysis using specialized models to detect prompt injection attempts
  - [x] LLM Analysis - Deep structured analysis of function calls and results
  - [x] Quarantine Analysis - Two-stage infection simulation to detect sophisticated attacks
  - [x] Threat Detection - 14 threat categories (S1-S14) covering all major attack vectors
  - [x] Custom Shields - Prompt-based blocking rules for direct prompt injection detection
- [x] 🔐 Governance & RBAC - Role-based access control
  - [x] Function-level permissions and access control
  - [x] Policy-driven security rules
  - [x] Function chaining rules to prevent unauthorized sequences
  - [x] User role management and audit trails
- [x] 📊 Observability - OpenTelemetry-native tracing
  - [x] Automatic instrumentation for OpenAI, Anthropic, LangChain, and more
  - [x] Real-time trace viewing with security analysis integration
  - [x] SQL access to all trace data
  - [x] Custom dashboards and metrics
  - [x] Evaluations framework for testing and validation
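Conceptually, the multi-stage pipeline runs each check in order and stops at the first one that blocks. A minimal illustrative sketch of that flow; the stage functions here are hypothetical stand-ins, not the HipoCap API:

```python
# Illustrative only: each stage is a toy stand-in for the real check.
def prompt_guard(text: str) -> bool:
    """Stage 1: fast input screen (stand-in heuristic, not the real model)."""
    return "ignore previous instructions" not in text.lower()

def llm_analysis(text: str) -> bool:
    """Stage 2: deep analysis (stand-in; always passes here)."""
    return True

def run_pipeline(text: str) -> dict:
    # Run stages in order; stop at the first stage that blocks.
    for stage in (prompt_guard, llm_analysis):
        if not stage(text):
            return {"safe_to_use": False, "blocked_by": stage.__name__}
    return {"safe_to_use": True}
```

The point of the staged design is that cheap checks run first, so most benign traffic never pays for the expensive analysis.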
## Demo

<p align="center"> <img alt="HipoCap demo screenshot" src="./images/demo.png" width="700"> </p>

## Quick Start (5 minutes)
Get HipoCap running locally in minutes.
### Prerequisites
- Docker and Docker Compose installed
- Git
### Step 1: Clone and Setup

```bash
git clone https://github.com/hipocap/hipocap
cd hipocap
```
### Step 2: Create Environment File

Create a `.env` file in the project root with the minimal required variables.

Linux/Mac (the heredoc delimiter must be unquoted so the `$(openssl ...)` substitutions actually run):

```bash
cat > .env << EOF
# Database (required)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
HIPOCAP_DB_NAME=hipocap_second

# ClickHouse (required)
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=clickhouse_password

# Security tokens (required - generated as random strings)
SHARED_SECRET_TOKEN=$(openssl rand -hex 32)
AEAD_SECRET_KEY=$(openssl rand -hex 32)
HIPOCAP_API_KEY=$(openssl rand -hex 16)
NEXTAUTH_SECRET=$(openssl rand -hex 32)

# LLM Configuration (optional - for security analysis)
OPENAI_API_KEY=your-openai-key-here
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_MODEL=gpt-4o-mini
EOF
```
Windows (PowerShell):

```powershell
@"
# Database (required)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=postgres
HIPOCAP_DB_NAME=hipocap_second

# ClickHouse (required)
CLICKHOUSE_USER=default
CLICKHOUSE_PASSWORD=clickhouse_password

# Security tokens (required - replace with random strings)
SHARED_SECRET_TOKEN=replace-with-random-32-char-string
AEAD_SECRET_KEY=replace-with-random-32-char-string
HIPOCAP_API_KEY=replace-with-random-16-char-string
NEXTAUTH_SECRET=replace-with-random-32-char-string

# LLM Configuration (optional - for security analysis)
OPENAI_API_KEY=your-openai-key-here
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_MODEL=gpt-4o-mini
"@ | Out-File -FilePath .env -Encoding utf8
```
Or use the example file:

```bash
# If .env.example exists
cp .env.example .env
# Then edit .env and replace placeholder values
```
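The placeholder secrets above need real random values. If Python is available, the standard-library `secrets` module generates them on any OS, matching the lengths of the `openssl rand -hex` calls in the Linux/Mac version:

```python
import secrets

# 32 random bytes -> 64 hex characters; 16 bytes -> 32 hex characters.
print("SHARED_SECRET_TOKEN=" + secrets.token_hex(32))
print("AEAD_SECRET_KEY=" + secrets.token_hex(32))
print("HIPOCAP_API_KEY=" + secrets.token_hex(16))
print("NEXTAUTH_SECRET=" + secrets.token_hex(32))
```

Paste the printed lines into `.env` in place of the placeholders.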
### Step 3: Start Services

```bash
docker compose -f docker-compose.yml up -d
```
This starts all services:
- Frontend → http://localhost:3000
- HipoCap Server → http://localhost:8006
- Observability Backend → http://localhost:8000
- PostgreSQL → localhost:5433
- ClickHouse → localhost:8123
- Quickwit → http://localhost:7280
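To confirm the services above are actually listening, a quick check with Python's `socket` module works on any OS (port numbers taken from the list above):

```python
import socket

def check_ports(services: dict, host: str = "localhost") -> dict:
    """Return True for each service whose TCP port accepts a connection."""
    status = {}
    for name, port in services.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                status[name] = True
        except OSError:
            status[name] = False
    return status

print(check_ports({
    "Frontend": 3000,
    "HipoCap Server": 8006,
    "Observability Backend": 8000,
    "PostgreSQL": 5433,
    "ClickHouse": 8123,
    "Quickwit": 7280,
}))
```

Any `False` entry usually means the corresponding container is still starting or failed; check its logs with `docker compose logs`.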
### Step 4: Access the Dashboard

Open your browser and go to http://localhost:3000.
You'll be prompted to sign up and create an account. Once logged in, you can:
- View traces and observability data
- Configure security policies
- Set up API keys for your applications
### Creating Your First Policy

After logging in, navigate to the Policies section to create your first security policy:

1. **Access Policies**: Click "Policies" in the sidebar under the "Monitoring" section, or navigate to `/project/[your-project-id]/policies`
2. **Create Policy**: Click the "Create Policy" button to open the policy creation form
3. **Configure Policy**: Set up your policy with:
   - **Policy Key**: A unique identifier (e.g., `default`, `strict`, `permissive`)
   - **Roles**: Define user roles and their permissions
   - **Functions**: Specify which functions are allowed/blocked
   - **Severity Rules**: Configure threat detection thresholds
   - **Function Chaining**: Set rules for function call sequences
   - **Output Restrictions**: Control what data can be returned
   - **Prompts**: Add custom prompts for LLM and Quarantine Analysis

Note: You'll need to create at least one policy before using HipoCap's security analysis features. The policy key you create will be referenced in your code when calling `client.analyze()`.
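To make the fields above concrete, here is a purely illustrative policy sketched as a Python dict. Every key name in it is an assumption for illustration; the real schema is whatever the dashboard form produces:

```python
# Hypothetical policy shape -- field names are illustrative, not HipoCap's schema.
policy = {
    "policy_key": "default",
    "roles": {
        "user": {"allowed_functions": ["get_user_data"]},
        "admin": {"allowed_functions": ["*"]},
    },
    "severity_rules": {"block_at_or_above": "S3"},  # threat categories S1-S14
    "function_chaining": {"forbid": [["read_secrets", "send_email"]]},
    "output_restrictions": {"redact": ["email", "ssn"]},
}

def role_allows(policy: dict, role: str, function_name: str) -> bool:
    """Toy check: is this function permitted for this role?"""
    allowed = policy["roles"].get(role, {}).get("allowed_functions", [])
    return "*" in allowed or function_name in allowed
```

The dashboard enforces the equivalent of `role_allows` (plus severity, chaining, and output rules) server-side when you call `client.analyze()` with a `policy_key`.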
### Creating Your First Shield

Shields are prompt-based blocking rules designed specifically for Direct Prompt Injection detection. They allow you to define custom rules for what to block and what not to block based on prompt descriptions.

1. **Access Shields**: Click "Shields" in the sidebar under the "Monitoring" section, or navigate to `/project/[your-project-id]/shields`
2. **Create Shield**: Click the "Create Shield" button to open the shield creation form
3. **Configure Shield**: Set up your shield with:
   - **Shield Key**: A unique identifier (e.g., `jailbreak`, `data-extraction`, `system-prompt-leak`)
   - **Name**: A human-readable name for the shield
   - **Description**: Optional description of the shield's purpose
   - **Prompt Description**: Description of the type of prompts this shield should analyze
   - **What to Block**: Detailed description of content patterns to block
   - **What Not to Block**: Exceptions or content that should be allowed

Note: Shields are optimized for direct prompt injection scenarios where you need to analyze user input before it reaches your LLM. The shield key you create will be referenced in your code when calling `client.shield()`.
### Step 5: Check Service Status

```bash
# View all services
docker compose -f docker-compose.yml ps

# View logs
docker compose -f docker-compose.yml logs -f

# Stop services
docker compose -f docker-compose.yml down
```
## Python Integration

### Install HipoCap SDK

```bash
pip install 'hipocap[all]'
```
This installs the HipoCap Python SDK and all instrumentation packages.
### Basic Example

```python
import os

from hipocap import Hipocap, observe
from openai import OpenAI

# Initialize HipoCap
client = Hipocap.initialize(
    project_api_key=os.environ.get("HIPOCAP_API_KEY"),
    base_url="http://localhost",  # Observability server
    http_port=8000,
    grpc_port=8001,
    hipocap_base_url="http://localhost:8006",  # Security server
    hipocap_user_id=os.environ.get("HIPOCAP_USER_ID"),
)

# OpenAI client (automatically instrumented)
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()  # This function will be traced
def get_user_data(user_id: str):
    """Retrieve user data - automatically traced."""
    return {"user_id": user_id, "email": f"user{user_id}@example.com"}

@observe()
def process_user_request():
    user_query = "What's my email?"
    user_id = "123"

    # Execute function
    user_data = get_user_data(user_id)

    # Analyze for security threats
    result = client.analyze(
        function_name="get_user_data",
        function_result=user_data,
        function_args={"user_id": user_id},
        user_query=user_query,
        user_role="user",
        input_analysis=True,  # Stage 1: Prompt Guard
        llm_analysis=True,    # Stage 2: LLM Analysis
        policy_key="default",
    )

    # Only return if safe
    if not result.get("safe_to_use"):
        return {"error": "Blocked by security policy", "reason": result.get("reason")}

    return user_data

result = process_user_request()
print(result)
```
### Shield Example (Direct Prompt Injection Detection)

Shields are designed specifically for Direct Prompt Injection detection. They allow you to analyze any text content (user input, emails, documents, etc.) before it reaches your LLM:

```python
from hipocap import Hipocap

client = Hipocap.initialize(
    project_api_key="your-api-key-here",
    base_url="http://localhost",  # Observability server
    http_port=8000,
    grpc_port=8001,
    hipocap_base_url="http://localhost:8006",  # Security server
    hipocap_user_id="your-user-id-here",
)

# Interactive shield analysis
while True:
    content = input("Enter content to analyze: ")
    result = client.shield(
        shield_key="jailbreak",
        content=content,
    )
    print(result["decision"])    # "BLOCK" or "ALLOW"
    print(result.get("reason"))  # Optional reason if require_reason=True
```
Shield Features:
- Analyze any text input (not just function calls)
- Custom blocking rules per shield
- Fast decision-making for real-time protection
- Optional reasoning for blocked content
Note: Create your shields in the dashboard first (see "Creating Your First Shield" above). Shields are ideal for protecting against direct prompt injection attacks where malicious instructions are embedded in user input.
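A common pattern is to gate every user message behind a shield check before it reaches the model. A hedged sketch under that assumption; `shield_check` is a stand-in callable with the same `{"decision", "reason"}` result shape shown above, so you can swap in `client.shield`:

```python
from typing import Callable

def guard_input(shield_check: Callable[[str], dict], content: str) -> str:
    """Run content through a shield; raise if the shield blocks it."""
    result = shield_check(content)
    if result.get("decision") == "BLOCK":
        raise ValueError(f"Blocked by shield: {result.get('reason', 'no reason given')}")
    return content

# Stand-in shield for demonstration -- blocks one obvious jailbreak phrase.
def fake_shield(content: str) -> dict:
    if "ignore all previous instructions" in content.lower():
        return {"decision": "BLOCK", "reason": "jailbreak pattern"}
    return {"decision": "ALLOW"}

safe = guard_input(fake_shield, "Summarize this article for me.")
```

With the real client you would pass something like `lambda c: client.shield(shield_key="jailbreak", content=c)` as `shield_check`.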
### Environment Variables for Python

Set the variables the examples above read before running them:

```bash
export HIPOCAP_API_KEY="your-project-api-key"
export HIPOCAP_USER_ID="your-user-id"
export OPENAI_API_KEY="your-openai-key"
```