OpenShell
OpenShell is the safe, private runtime for autonomous AI agents. It provides sandboxed execution environments that protect your data, credentials, and infrastructure — governed by declarative YAML policies that prevent unauthorized file access, data exfiltration, and uncontrolled network activity.
OpenShell is built agent-first. The project ships with agent skills for everything from cluster debugging to policy generation, and we expect contributors to use them.
Alpha software — single-player mode. OpenShell is proof-of-life: one developer, one environment, one gateway. We are building toward multi-tenant enterprise deployments, but the starting point is getting your own environment up and running. Expect rough edges. Bring your agent.
Quickstart
Prerequisites
- Docker — Docker Desktop (or a Docker daemon) must be running.
Install
Binary (recommended):
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh
From PyPI (requires uv):
uv tool install -U openshell
Both methods install the latest stable release by default. To install a specific version, set OPENSHELL_VERSION (binary) or pin the version with uv tool install openshell==<version>. A dev release is also available that tracks the latest commit on main.
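Combining the version pin with either install method might look like the following (the version number is illustrative, not a real release):

```shell
# Pin the binary installer to a specific release (version is illustrative)
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | OPENSHELL_VERSION=0.4.2 sh

# Pin the PyPI install to the same release
uv tool install openshell==0.4.2
```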
Create a sandbox
openshell sandbox create -- claude # or opencode, codex, copilot
A gateway is created automatically on first use. To deploy on a remote host instead, pass --remote user@host to the create command.
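As a sketch, a remote deployment might look like this (the host name is a placeholder; only the `--remote` flag itself is documented here):

```shell
# Create a sandbox whose gateway runs on a remote Docker host, reached over SSH
openshell sandbox create --remote dev@build-server.example.com -- claude
```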
The sandbox container includes the following tools by default:
| Category | Tools |
| ---------- | -------------------------------------------------------- |
| Agent | claude, opencode, codex, copilot |
| Language | python (3.13), node (22) |
| Developer | gh, git, vim, nano |
| Networking | ping, dig, nslookup, nc, traceroute, netstat |
For more details see https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base.
See network policy in action
Every sandbox starts with minimal outbound access. You open additional access with a short YAML policy that the proxy enforces at the HTTP method and path level, without restarting anything.
# 1. Create a sandbox (starts with minimal outbound access)
openshell sandbox create
# 2. Inside the sandbox — blocked
sandbox$ curl -sS https://api.github.com/zen
curl: (56) Received HTTP code 403 from proxy after CONNECT
# 3. Back on the host — apply a read-only GitHub API policy
sandbox$ exit
openshell policy set demo --policy examples/sandbox-policy-quickstart/policy.yaml --wait
# 4. Reconnect — GET allowed, POST blocked by L7
openshell sandbox connect demo
sandbox$ curl -sS https://api.github.com/zen
Anything added dilutes everything else.
sandbox$ curl -sS -X POST https://api.github.com/repos/octocat/hello-world/issues -d '{"title":"oops"}'
{"error":"policy_denied","detail":"POST /repos/octocat/hello-world/issues not permitted by policy"}
See the full walkthrough or run the automated demo:
bash examples/sandbox-policy-quickstart/demo.sh
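The quickstart policy lives at examples/sandbox-policy-quickstart/policy.yaml. As a rough sketch of the shape such a read-only GitHub policy might take (the field names here are illustrative, not the actual schema — consult the repository example for the real one):

```yaml
# Illustrative only — see examples/sandbox-policy-quickstart/policy.yaml
# for the actual schema.
network:
  allow:
    - host: api.github.com
      methods: [GET]        # read-only: POST/PUT/DELETE stay blocked at L7
      paths: ["/**"]
```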
How It Works
OpenShell isolates each sandbox in its own container with policy-enforced egress routing. A lightweight gateway coordinates sandbox lifecycle, and every outbound connection is intercepted by the policy engine, which does one of three things:
- Allows — the destination and binary match a policy block.
- Routes for inference — strips caller credentials, injects backend credentials, and forwards to the managed model.
- Denies — blocks the request and logs it.
| Component | Role |
| ------------------ | -------------------------------------------------------------------------------------------- |
| Gateway | Control-plane API that coordinates sandbox lifecycle and acts as the auth boundary. |
| Sandbox | Isolated runtime with container supervision and policy-enforced egress routing. |
| Policy Engine | Enforces filesystem, network, and process constraints from application layer down to kernel. |
| Privacy Router | Privacy-aware LLM routing that keeps sensitive context on sandbox compute. |
Under the hood, all these components run as a K3s Kubernetes cluster inside a single Docker container — no separate K8s install required. The openshell gateway commands take care of provisioning the container and cluster.
Protection Layers
OpenShell applies defense in depth across four policy domains:
| Layer | What it protects | When it applies |
| ---------- | --------------------------------------------------- | --------------------------- |
| Filesystem | Prevents reads/writes outside allowed paths. | Locked at sandbox creation. |
| Network | Blocks unauthorized outbound connections. | Hot-reloadable at runtime. |
| Process | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation. |
| Inference | Reroutes model API calls to controlled backends. | Hot-reloadable at runtime. |
Policies are declarative YAML files. Static sections (filesystem, process) are locked at creation; dynamic sections (network, inference) can be hot-reloaded on a running sandbox with openshell policy set.
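Hot-reloading a dynamic section uses the same command shown in the quickstart; for example, pushing an updated policy to a running sandbox (the sandbox name and file path are placeholders):

```shell
# Update the network/inference sections of a live sandbox — no restart needed
openshell policy set demo --policy ./policy.yaml --wait
```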
Providers
Agents need credentials — API keys, tokens, service accounts. OpenShell manages these as providers: named credential bundles that are injected into sandboxes at creation. The CLI auto-discovers credentials for recognized agents (Claude, Codex, OpenCode, Copilot) from your shell environment, or you can create providers explicitly with openshell provider create. Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime.
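Explicit provider creation might look like the sketch below (the provider name and any positional arguments are assumptions; only the `openshell provider create` command itself is documented above):

```shell
# Bundle an existing credential into a named provider (name is arbitrary)
openshell provider create anthropic-main
# The key is injected into new sandboxes as an environment variable at runtime,
# never written to the sandbox filesystem.
```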
GPU Support (Experimental)
Experimental — GPU passthrough works on supported hosts but is under active development. Expect rough edges and breaking changes.
OpenShell can pass host GPUs into sandboxes for local inference, fine-tuning, or any GPU workload. Add --gpu when creating a sandbox:
openshell sandbox create --gpu --from [gpu-enabled-sandbox] -- claude
The CLI auto-bootstraps a GPU-enabled gateway on first use. GPU intent is also inferred automatically for community images with gpu in the name.
Requirements: NVIDIA drivers and the NVIDIA Container Toolkit must be installed on the host. The sandbox image itself must include the appropriate GPU drivers and libraries for your workload — the default base image does not. See the BYOC example for building a custom sandbox image with GPU support.
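Before passing --gpu, it is worth confirming the host prerequisites with standard NVIDIA tooling (independent of OpenShell):

```shell
# Driver visible on the host?
nvidia-smi

# Container Toolkit wired into Docker? This should print the same GPU table
# from inside a throwaway CUDA container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```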
Supported Agents
| Agent | Source | Notes |
| ------------------------------------------------------------- | -------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- |
| Claude Code | base | Works out of the box. Provider uses ANTHROPIC_API_KEY. |
| OpenCode | base | Works out of the box. Provider uses OPENAI_API_KEY or OPENROUTER_API_KEY. |
| Codex | base | Works out of the box. Provider uses OPENAI_API_KEY. |
| GitHub Copilot CLI | base | Works out of the box. Provider uses GITHUB_TOKEN or COPILOT_GITHUB_TOKEN. |
| OpenClaw | Community | Launch with openshell sandbox create --from openclaw. |
| Ollama | Community | Launch with openshell sandbox create --from ollama. |
Key Commands
| Command | Description |
| ---------------------------------------------------------- | ----------------------------------------------- |
| `openshell s