ProtoMotions 3
A GPU-Accelerated Framework for Simulated Humanoids
Overview
ProtoMotions3 is a GPU-accelerated simulation and learning framework for training physically simulated digital humans and humanoid robots. Our mission is to provide a fast prototyping platform for various simulated humanoid learning tasks and environments—for researchers and practitioners in animation, robotics, and reinforcement learning—bridging efforts across communities.
Modularity, extensibility, and scalability are at the core of ProtoMotions3. It is community-driven and released under the permissive Apache-2.0 license.
Also check out MimicKit, our sibling repository: a lightweight framework for motion imitation learning.
<table> <tr> <td align="center"><img src="data/static/vault.gif" height="180"/></td> <td align="center"><img src="data/static/g1_tracker.gif" height="180"/></td> <td align="center"><img src="data/static/soma_regen.gif" height="180"/></td> </tr> <tr> <td align="center"><img src="data/static/wineglass.gif" height="180"/></td> <td align="center"><img src="data/static/real_robot.gif" height="180"/></td> <td align="center"><img src="data/static/real_robot_3.gif" height="180"/></td> </tr> </table>

What You Can Do with ProtoMotions3
🏃 Large-Scale Motion Learning
Train your fully physically simulated character to learn motion skills from the entire public AMASS human animation dataset (40+ hours) within 12 hours on 4 A100s.
<p align="center"> <img src="data/static/smpl_mlp_094132.gif" alt="SMPL motion 1" height="180"> <img src="data/static/smpl_mlp_094428.gif" alt="SMPL motion 2" height="180"> <img src="data/static/smpl_mlp_095344.gif" alt="SMPL motion 3" height="180"> <img src="data/static/smpl_mlp_095848.gif" alt="SMPL motion 4" height="180"> <img src="data/static/smpl_mlp_095746.gif" alt="SMPL motion 5" height="180"> </p>

📈 Scalable Multi-GPU Training
Scale training to even larger datasets, with each GPU handling a subset of motions. For example, we have trained on 24 A100s, with 13K motions per GPU, using the BONES dataset in SOMA skeleton format. Check out Quick Start and SEED BVH Data Preparation to play around with the dataset and pre-trained models today.
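The per-GPU motion split described above can be sketched as a simple rank-based shard. This is an illustrative example only; `shard_motions` and the motion counts are hypothetical, not the actual ProtoMotions API.

```python
def shard_motions(motion_ids, rank, world_size):
    """Give each GPU rank a contiguous, near-equal slice of the motion list.

    Illustrative sketch of per-GPU dataset sharding; not ProtoMotions code.
    """
    n = len(motion_ids)
    per_rank = (n + world_size - 1) // world_size  # ceil division
    start = rank * per_rank
    return motion_ids[start:start + per_rank]

# Example: 312K motions spread over 24 GPUs -> 13K motions each.
motions = list(range(312_000))
shards = [shard_motions(motions, r, 24) for r in range(24)]
```

Each rank then builds its environments and replay buffers only from its own shard, so the full dataset never has to fit on a single GPU.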
<p align="center"> <img src="data/static/soma_regen_markers.gif" height="180"> <img src="data/static/soma_regen_2.gif" height="180"> <img src="data/static/soma_regen_3.gif" height="180"> <img src="data/static/soma_regen_4.gif" height="180"> <img src="data/static/soma_regen_5.gif" height="180"> </p>

🔄 One-Command Retargeting
Transfer (retarget) the entire AMASS dataset to your favorite robot with the built-in PyRoki-based optimizer—in one command.
<p align="center"> <img src="data/static/retargeting-g1.gif" alt="G1 retargeting" height="280"> </p>

Note: As of v3, we use PyRoki for retargeting. Earlier versions used Mink.
🤖 Train Any Robot
Train your robot to perform AMASS motor skills in 12 hours by changing just one command argument:
--robot-name=smpl → --robot-name=h1_2, plus preparing retargeted motions (see here).
🔬 Sim2Sim Testing
Test robot control policies on H1_2 or G1 across physics engines with a one-flag switch (--simulator=isaacgym → --simulator=newton → --simulator=mujoco), covering NVIDIA Newton and MuJoCo CPU. The policies shown below use only observations obtainable from real hardware.
🤖 From Sim to Real
Train in simulation, deploy on real hardware. ProtoMotions trains one General Tracking Policy on the entire BONES-SEED dataset (~142K motions) that transfers zero-shot to the Unitree G1 humanoid robot.
<p align="center"> <img src="data/static/g1_deploy_1.gif" alt="G1 deployment 1" height="240"> <img src="data/static/g1_deploy_2.gif" alt="G1 deployment 2" height="240"> <img src="data/static/real_robot_2.gif" alt="G1 real robot" height="240"> </p>

Our deployment pipeline exports a single ONNX model (with observation computation baked in), so deployment frameworks only need to provide raw sensor signals — no need to rewrite obs functions or match training internals. We tested on the Unitree G1 via the brilliant RoboJuDo framework, adding just one policy file with no mandatory changes to RoboJuDo core.
📖 Full Deployment Tutorial — from data preparation to real robot, fully reproducible.
🎨 High-Fidelity Rendering
Test your policy in IsaacSim 5.0+, which can load beautifully rendered Gaussian-splatting backgrounds via Omniverse NuRec (the rendered scene is not yet physically interactive).
<p align="center"> <img src="data/static/g1-neurc.gif" alt="G1 NeuRec" height="280"> </p>

🎬 Motion Authoring with Kimodo
With Kimodo (NVIDIA's text-to-motion generation model), generate any motion from a text prompt and use ProtoMotions to train a physics-based policy that performs the motion — for both the SOMA animation character and the Unitree G1 robot. Policies trained this way can be deployed directly on real hardware.
See Kimodo Data Preparation for how to convert Kimodo outputs to ProtoMotions format.
<p align="center"> <img src="data/static/aibm-vaulting.gif" alt="Vaulting" height="240"> <img src="data/static/g1_robot_walking.gif" alt="G1 robot walking" height="240"> </p>

Image Credit: NVIDIA Human Motion Modeling Research
🏗️ Procedural Scene Generation
Procedurally generate many scenes for scalable Synthetic Data Generation (SDG): start from a seed motion set and use RL to adapt the motions to the augmented scenes.
<p align="center"> <img src="data/static/augmented_combined.gif" alt="Augmented Scenes and Motions" height="280"> </p>

🎭 Generative Policies
Train a generative policy (e.g., MaskedMimic) that can autonomously choose its "move" to finish the task.
<table align="center"> <tr> <td align="center"><img src="data/static/maskedmimic_093152.gif" alt="MaskedMimic 1" height="180"/></td> <td align="center"><img src="data/static/maskedmimic_093229.gif" alt="MaskedMimic 2" height="180"/></td> <td align="center"><img src="data/static/maskedmimic_093313.gif" alt="MaskedMimic 3" height="180"/></td> </tr> <tr> <td align="center"><img src="data/static/maskedmimic_093430.gif" alt="MaskedMimic 4" height="180"/></td> <td align="center"><img src="data/static/maskedmimic_093406.gif" alt="MaskedMimic 5" height="180"/></td> <td align="center"><img src="data/static/maskedmimic_093349.gif" alt="MaskedMimic 6" height="180"/></td> </tr> </table>

⛰️ Terrain Navigation
Train your robot to hike across challenging terrain!
<p align="center"> <img src="data/static/smpl_terrain.gif" alt="SMPL Terrain" height="280"> </p>

🎯 Custom Environments
Have a new task? Build it from modular components — no monolithic env class needed. Here's how the steering task is composed:
| Layer | File | What it does |
|-------|------|-------------|
| Control | steering_control.py | Manages task state (target direction, speed, facing). Periodically samples new heading targets. |
| Observation | obs/steering.py | Pure tensor kernel — transforms targets to robot-local frame → 5D feature vector. |
| Reward | rewards/task.py | compute_heading_velocity_rew — blends direction-matching (0.7) and facing-matching (0.3) rewards. |
| Experiment | steering/mlp.py | Wires components together as MdpComponent instances via context paths. |
Each piece is a standalone function or class — the experiment config binds them together through context paths.
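A minimal, self-contained sketch of how such components might compose. The function names, the 5D feature layout, and the exact reward shaping are illustrative assumptions based on the table above (the 0.7/0.3 blend comes from the table); this is not the actual ProtoMotions code.

```python
import numpy as np

# Control: periodically sample a new world-frame heading target and speed.
def sample_heading_target(rng):
    angle = rng.uniform(-np.pi, np.pi)
    direction = np.array([np.cos(angle), np.sin(angle)])
    speed = rng.uniform(0.0, 3.0)
    return direction, speed

# Observation: rotate world-frame targets into the robot-local frame and
# pack a 5D feature vector (local target dir xy, speed, local facing xy).
def steering_obs(target_dir, target_speed, robot_yaw, facing_dir):
    c, s = np.cos(-robot_yaw), np.sin(-robot_yaw)
    rot = np.array([[c, -s], [s, c]])  # world -> robot-local rotation
    local_dir = rot @ target_dir
    local_facing = rot @ facing_dir
    return np.concatenate([local_dir, [target_speed], local_facing])

# Reward: blend direction-matching (0.7) and facing-matching (0.3) terms.
def heading_velocity_reward(vel, target_dir, target_speed, facing, target_facing):
    vel_err = target_speed - vel @ target_dir  # speed error along target dir
    dir_rew = np.exp(-0.25 * vel_err ** 2)
    face_rew = np.clip(facing @ target_facing, 0.0, 1.0)
    return 0.7 * dir_rew + 0.3 * face_rew
```

An experiment config would then wire these as MdpComponent instances, each reading its inputs (robot state, sampled targets) from a shared context rather than from a monolithic env class.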
