SecureShell
A plug-and-play security layer for LLMs and Agents that prevents dangerous command execution.
SecureShell acts as "sudo for LLMs": a drop-in zero-trust gatekeeper that evaluates every shell command before execution. It blocks hallucinated commands, prevents platform mismatches (e.g., Unix commands on Windows), and helps agents learn from mistakes.
Why SecureShell?
LLM agents with shell access can hallucinate dangerous commands like `rm -rf /` or `dd if=/dev/zero`. SecureShell solves this by:
- Zero-Trust Gatekeeper - Every command is treated as untrusted until validated by an independent gatekeeper
- Platform-Aware - Automatically blocks Unix commands on Windows (and vice versa)
- Risk Classification - GREEN/YELLOW/RED tiers with automatic handling
- Agent Learning - Clear feedback helps agents self-correct
- Drop-in Integration - Plug into LangChain, LangGraph, MCP, or use standalone
- Multi-LLM Support - Works with any LLM provider
Quick Start
TypeScript
```shell
npm install secureshell-ts
```

```typescript
import { SecureShell, OpenAIProvider } from 'secureshell-ts';

const shell = new SecureShell({
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4.1-mini'
  }),
  template: 'development'
});

const result = await shell.execute(
  'ls -la',
  'List files to check project structure'
);

if (result.success) {
  console.log(result.stdout);
} else {
  console.error('Blocked:', result.gatekeeper_reasoning);
}

await shell.close();
```
Python
```shell
pip install secureshell
```

```python
import os
from secureshell import SecureShell
from secureshell.providers.openai import OpenAI

shell = SecureShell(
    template='development',
    provider=OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
)

# Run inside an async function / event loop
result = await shell.execute(
    command='ls -la',
    reasoning='List files to check project structure'
)

if result.success:
    print(result.stdout)
else:
    print(f"Blocked: {result.gatekeeper_reasoning}")

await shell.shutdown()
```
How It Works
When an agent tries to run a command:
1. Risk Classification - Categorizes the command as GREEN (safe), YELLOW (needs review), or RED (dangerous)
2. Sandbox Check - Validates paths against allowed/blocked lists
3. Platform Check - Ensures command compatibility with the host OS
4. Zero-Trust Gatekeeper - An LLM evaluates YELLOW/RED commands with full context
5. Execution or Denial - Runs approved commands, blocks dangerous ones
6. Agent Feedback - Returns detailed reasoning so the agent can self-correct
Example Flow:

```
Agent:       "Run 'ls -la'"
SecureShell: [On Windows] DENY - "ls is Unix-only, use 'dir' instead"
Agent:       "Run 'dir'"
SecureShell: ALLOW
Output:      [directory listing]
```
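The classification step in the pipeline above can be pictured as a simple tier lookup. This is an illustrative sketch only, not SecureShell's actual rule set — the command tables below are assumptions:

```python
import shlex

# Hypothetical tier tables -- illustrative only, not SecureShell's real rules.
GREEN_COMMANDS = {"ls", "dir", "pwd", "cat", "echo", "git"}
RED_COMMANDS = {"rm", "dd", "mkfs", "shutdown"}

def classify(command: str) -> str:
    """Return GREEN (safe), YELLOW (needs review), or RED (dangerous)."""
    program = shlex.split(command)[0]
    if program in RED_COMMANDS:
        return "RED"     # always escalated to the gatekeeper, likely denied
    if program in GREEN_COMMANDS:
        return "GREEN"   # may be executed without gatekeeper review
    return "YELLOW"      # unknown commands get a full LLM evaluation
```

In a real deployment the tables would come from the active security template, and YELLOW/RED verdicts would be forwarded to the gatekeeper LLM rather than decided locally.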
Features
Security Templates
Drop-in security profiles for common scenarios - no configuration needed:
- Paranoid - Maximum security, blocks almost everything
- Production - Balanced for production deployments
- Development - Permissive for local development
- CI/CD - Optimized for automated pipelines
```typescript
const shell = new SecureShell({ template: 'paranoid' });
```
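One way to picture what a template bundles is as a frozen profile of policy switches. The field names here are hypothetical — the real profiles may carry different settings:

```python
from dataclasses import dataclass

# Hypothetical profile shape -- field names are illustrative, not SecureShell's API.
@dataclass(frozen=True)
class SecurityTemplate:
    name: str
    auto_approve_green: bool       # run GREEN commands without gatekeeper review
    allow_red_with_review: bool    # whether RED commands can ever pass review
    blocked_paths: tuple = ()      # paths the sandbox check always rejects

PARANOID = SecurityTemplate("paranoid", auto_approve_green=False,
                            allow_red_with_review=False,
                            blocked_paths=("/", "C:\\"))
DEVELOPMENT = SecurityTemplate("development", auto_approve_green=True,
                               allow_red_with_review=True)
```

Selecting a template then amounts to picking one of these profiles at construction time instead of hand-tuning each switch.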
Platform Awareness
Automatically detects OS and blocks incompatible commands:
```typescript
// On Windows
await shell.execute('rm -rf file.txt', 'Delete file');
// Blocked: "rm is Unix-only, use 'del' on Windows"

// On Linux
await shell.execute('del file.txt', 'Delete file');
// Blocked: "del is Windows-only, use 'rm' on Unix"
```
LLM Providers
Plug in your preferred LLM for the zero-trust gatekeeper:
- OpenAI - GPT-4o, GPT-4.1-mini, GPT-3.5-turbo
- Anthropic - Claude 3.5 Sonnet, Claude 3.5 Haiku
- Google Gemini - Gemini 2.5 Flash, Gemini 1.5 Pro
- DeepSeek - deepseek-chat
- Groq - Llama 3.3, Mixtral
- Ollama - Local models (llama3, mistral, qwen)
- LlamaCpp - Local models via llama.cpp server
All providers support the same drop-in interface - just swap the provider.
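Swappability usually comes down to every provider implementing one small contract. A sketch of what such an interface might look like — the method name `evaluate` and the verdict shape are assumptions, not SecureShell's documented API:

```python
from typing import Protocol

class GatekeeperProvider(Protocol):
    """Hypothetical provider contract: any backend that can judge a command."""
    def evaluate(self, command: str, reasoning: str) -> dict: ...

class StaticProvider:
    """Stub provider that denies a fixed blocklist; stands in for a real LLM call."""
    def __init__(self, blocklist: set[str]):
        self.blocklist = blocklist

    def evaluate(self, command: str, reasoning: str) -> dict:
        program = command.split()[0]
        allowed = program not in self.blocklist
        verdict = "approved" if allowed else "is on the blocklist"
        return {"allowed": allowed, "reasoning": f"'{program}' {verdict}"}
```

Because the shell only depends on the contract, swapping OpenAI for Ollama (or a stub like the one above in tests) is a one-line change at construction time.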
Framework Integrations
Drop into your existing LLM framework without changing your code:
LangChain:

```typescript
import { createSecureShellTool } from 'secureshell-ts';

const tool = createSecureShellTool(shell);
const agent = await createToolCallingAgent({ llm, tools: [tool], prompt });
```

LangGraph:

```typescript
const tool = createSecureShellTool(shell);
const workflow = new StateGraph({...}).addNode('tools', toolNode);
```

MCP (Model Context Protocol):

```typescript
import { createSecureShellMCPTool } from 'secureshell-ts';

const mcpTool = createSecureShellMCPTool(shell);
// Plug into Claude Desktop and other MCP clients
```
Real-World Use Cases
- AI DevOps Agents - Safely automate deployments and infrastructure tasks
- Code Assistants - Allow file operations and git commands with guardrails
- Data Processing - Execute data pipelines with oversight
- CI/CD Automation - Run build and test commands securely
- Local AI Assistants - Give Claude Desktop safe shell access
Documentation
- Getting Started - Installation and first steps
- Security Templates - Pre-built security profiles
- Zero-Trust Gatekeeper - How command evaluation works
- Risk Classification - Understanding risk tiers
- Platform Awareness - OS-specific handling
- Providers - LLM provider guides
- Integrations - Framework integration guides
- MCP Integration - Model Context Protocol setup
Examples
Complete working examples for both TypeScript and Python:
- Providers: OpenAI, Anthropic, Gemini, DeepSeek, Groq, Ollama, LlamaCpp
- Integrations: LangChain, LangGraph, MCP
- Use Cases: DevOps automation, code assistants, data processing
Browse the cookbook for runnable code.
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE for details.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built for safety. Designed for autonomy.