# DimOS
Dimensional is the agentic operating system for physical space. Vibecode humanoids, quadrupeds, drones, and other hardware platforms in natural language and build multi-agent systems that work seamlessly with physical input (cameras, lidar, actuators).
<div align="center">

<a href="https://trendshift.io/repositories/23169" target="_blank"><img src="https://trendshift.io/api/badge/repositories/23169" alt="dimensionalOS%2Fdimos | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

<big><big>

Hardware • Installation • Agent CLI & MCP • Blueprints • Development

⚠️ Pre-Release Beta ⚠️

</big></big>

</div>

## Intro
Dimensional is the modern operating system for generalist robotics. We are setting the next-generation SDK standard, integrating with the majority of robot manufacturers.
With a simple install and no ROS required, build physical applications entirely in Python that run on any humanoid, quadruped, or drone.

Dimensional is agent-native: "vibecode" your robots in natural language and build local and hosted multi-agent systems that work seamlessly with your hardware. Agents run as native modules — subscribing to any embedded stream, from perception (lidar, camera) and spatial memory down to control loops and motor drivers.
<table>
  <tr>
    <td align="center" width="50%">
      <a href="docs/capabilities/navigation/native/index.md"><img src="assets/readme/navigation.gif" alt="Navigation" width="100%"></a>
    </td>
    <td align="center" width="50%">
      <img src="assets/readme/perception.png" alt="Perception" width="100%">
    </td>
  </tr>
  <tr>
    <td align="center" width="50%">
      <h3><a href="docs/capabilities/navigation/native/index.md">Navigation and Mapping</a></h3>
      SLAM, dynamic obstacle avoidance, route planning, and autonomous exploration — via both DimOS native and ROS<br><a href="https://x.com/stash_pomichter/status/2010471593806545367">Watch video</a>
    </td>
    <td align="center" width="50%">
      <h3>Perception</h3>
      Detectors, 3D projections, VLMs, audio processing
    </td>
  </tr>
  <tr>
    <td align="center" width="50%">
      <a href="docs/capabilities/agents/readme.md"><img src="assets/readme/agentic_control.gif" alt="Agents" width="100%"></a>
    </td>
    <td align="center" width="50%">
      <img src="assets/readme/spatial_memory.gif" alt="Spatial Memory" width="100%">
    </td>
  </tr>
  <tr>
    <td align="center" width="50%">
      <h3><a href="docs/capabilities/agents/readme.md">Agentic Control, MCP</a></h3>
      "hey Robot, go find the kitchen"<br><a href="https://x.com/stash_pomichter/status/2015912688854200322">Watch video</a>
    </td>
    <td align="center" width="50%">
      <h3>Spatial Memory</h3>
      Spatio-temporal RAG, dynamic memory, object localization and permanence<br><a href="https://x.com/stash_pomichter/status/1980741077205414328">Watch video</a>
    </td>
  </tr>
</table>

## Hardware
<table>
  <tr>
    <td align="center" width="20%"> <h3>Quadruped</h3> <img width="245" height="1" src="assets/readme/spacer.png"> </td>
    <td align="center" width="20%"> <h3>Humanoid</h3> <img width="245" height="1" src="assets/readme/spacer.png"> </td>
    <td align="center" width="20%"> <h3>Arm</h3> <img width="245" height="1" src="assets/readme/spacer.png"> </td>
    <td align="center" width="20%"> <h3>Drone</h3> <img width="245" height="1" src="assets/readme/spacer.png"> </td>
    <td align="center" width="20%"> <h3>Misc</h3> <img width="245" height="1" src="assets/readme/spacer.png"> </td>
  </tr>
  <tr>
    <td align="center" width="20%">
      🟩 <a href="docs/platforms/quadruped/go2/index.md">Unitree Go2 Pro/Air</a><br>
      🟥 <a href="dimos/robot/unitree/b1">Unitree B1</a><br>
    </td>
    <td align="center" width="20%">
      🟨 <a href="docs/platforms/humanoid/g1/index.md">Unitree G1</a><br>
    </td>
    <td align="center" width="20%">
      🟨 <a href="docs/capabilities/manipulation/readme.md">xArm</a><br>
      🟨 <a href="docs/capabilities/manipulation/readme.md">AgileX Piper</a><br>
    </td>
    <td align="center" width="20%">
      🟧 <a href="dimos/robot/drone/README.md">MAVLink</a><br>
      🟧 <a href="dimos/robot/drone/README.md">DJI Mavic</a><br>
    </td>
    <td align="center" width="20%">
      🟥 <a href="https://github.com/dimensionalOS/openFT-sensor">Force Torque Sensor</a><br>
    </td>
  </tr>
</table>
<br>
<div align="right"> 🟩 stable 🟨 beta 🟧 alpha 🟥 experimental </div>

> [!IMPORTANT]
> 🤖 Direct your favorite agent (OpenClaw, Claude Code, etc.) to AGENTS.md and our CLI and MCP interfaces to start building powerful Dimensional applications.
## Installation
### Interactive Install

```bash
curl -fsSL https://raw.githubusercontent.com/dimensionalOS/dimos/main/scripts/install.sh | bash
```
See `scripts/install.sh --help` for non-interactive and advanced options.
### Manual System Install

To set up your system dependencies manually, see the full system requirements, tested configurations, and dependency tiers in [docs/requirements.md](docs/requirements.md).
### Python Install
#### Quickstart

```bash
uv venv --python "3.12"
source .venv/bin/activate
uv pip install 'dimos[base,unitree]'

# Replay a recorded quadruped session (no hardware needed)
# NOTE: the first run shows a black Rerun window while ~75 MB downloads from LFS
dimos --replay run unitree-go2

# Install with simulation support
uv pip install 'dimos[base,unitree,sim]'

# Run the quadruped in MuJoCo simulation
dimos --simulation run unitree-go2

# Run the humanoid in simulation
dimos --simulation run unitree-g1-sim

# Control a real robot (Unitree quadruped over WebRTC)
export ROBOT_IP=<YOUR_ROBOT_IP>
dimos run unitree-go2
```
### Featured Runfiles
| Run command | What it does |
|-------------|-------------|
| `dimos --replay run unitree-go2` | Quadruped navigation replay — SLAM, costmap, A* planning |
| `dimos --replay --replay-dir unitree_go2_office_walk2 run unitree-go2-temporal-memory` | Quadruped temporal memory replay |
| `dimos --simulation run unitree-go2-agentic-mcp` | Quadruped agent + MCP server in simulation |
| `dimos --simulation run unitree-g1` | Humanoid in MuJoCo simulation |
| `dimos --replay run drone-basic` | Drone video + telemetry replay |
| `dimos --replay run drone-agentic` | Drone + LLM agent with flight skills (replay) |
| `dimos run demo-camera` | Webcam demo — no robot hardware needed |
| `dimos run keyboard-teleop-xarm7` | Keyboard teleop with a mock xArm7 (requires the `dimos[manipulation]` extra) |
| `dimos --simulation run unitree-go2-agentic-ollama` | Quadruped agent with a local LLM (requires Ollama and `ollama serve`) |
Full blueprint docs: [docs/usage/blueprints.md](docs/usage/blueprints.md)
## Agent CLI and MCP

The `dimos` CLI manages the full lifecycle — run blueprints, inspect state, interact with agents, and call skills via MCP.
```bash
dimos run unitree-go2-agentic-mcp --daemon       # Start in the background
dimos status                                     # Check what's running
dimos log -f                                     # Follow logs
dimos agent-send "explore the room"              # Send the agent a command
dimos mcp list-tools                             # List available MCP skills
dimos mcp call relative_move --arg forward=0.5   # Call a skill directly
dimos stop                                       # Shut down
```
Full CLI reference: [docs/usage/cli.md](docs/usage/cli.md)
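Because every lifecycle operation above is a plain CLI invocation, the same workflow can be scripted. Below is a minimal, hedged sketch of a Python wrapper around the documented commands; the `_bin` parameter is only a test hook for substituting another executable, not a dimos feature:

```python
import subprocess


def dimos_cli(*args: str, _bin: str = "dimos") -> str:
    """Invoke a dimos CLI command and return its stdout; raises on a non-zero exit."""
    result = subprocess.run(
        [_bin, *args], capture_output=True, text=True, check=True
    )
    return result.stdout


# Example lifecycle, using only commands from the reference above:
# dimos_cli("run", "unitree-go2-agentic-mcp", "--daemon")
# dimos_cli("agent-send", "explore the room")
# print(dimos_cli("status"))
# dimos_cli("stop")
```

Using `check=True` turns a failed command into an exception, which keeps automation scripts from silently continuing after a failed `run` or `stop`.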
## Usage

### Use DimOS as a Library

Below is a simple robot-connection module that receives a continuous stream of `cmd_vel` commands and publishes `color_image` frames to a simple `Listener` module. DimOS Modules are subsystems on a robot that communicate with other modules using standardized messages.
```python
import threading
import time

import numpy as np

from dimos.core.blueprints import autoconnect
from dimos.core.core import rpc
from dimos.core.module import Module
from dimos.core.stream import In, Out
from dimos.msgs.geometry_msgs import Twist
from dimos.msgs.sensor_msgs import Image, ImageFormat


class RobotConnection(Module):
    cmd_vel: In[Twist]
    color_image: Out[Image]

    @rpc
    def start(self):
        threading.Thread(target=self._image_loop, daemon=True).start()

    def _image_loop(self):
        # Publish a blank RGB frame at 5 Hz
        while True:
            img = Image.from_numpy(
                np.zeros((120, 160, 3), np.uint8),
                format=ImageFormat.RGB,
                frame_id="camera_optical",
            )
            self.color_image.publish(img)
            time.sleep(0.2)


class Listener(Module):
    color_image: In[Image]

    @rpc
    def start(self):
        # Subscribe to the incoming image stream; each frame is passed to the callback
        self.color_image.subscribe(self._on_image)

    def _on_image(self, img: Image):
        print("received frame:", img.frame_id)
```
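Conceptually, each `Out` stream fans a published message out to every connected `In` stream. The following self-contained sketch (plain Python — a toy illustration of the pattern, not the DimOS API) shows that publish/subscribe flow:

```python
from typing import Callable, Generic, List, TypeVar

T = TypeVar("T")


class Stream(Generic[T]):
    """Toy stand-in for a module stream: publish fans out to all subscribers."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[T], None]] = []

    def subscribe(self, callback: Callable[[T], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, msg: T) -> None:
        for callback in self._subscribers:
            callback(msg)


# Wiring a publisher to a listener by sharing the stream object:
color_image: Stream[bytes] = Stream()
frames: List[bytes] = []
color_image.subscribe(frames.append)   # the Listener side
color_image.publish(b"\x00" * 4)       # the RobotConnection side
# frames now holds the published frame
```

In DimOS the equivalent wiring is done for you when modules are composed into a blueprint, so modules only declare their typed `In`/`Out` endpoints.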