InterMimic
[CVPR 2025 Highlight] InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions
🏠 Overview
<div align="center"> <img src="assets/teaser.png" width="100%" alt="InterMimic teaser"/> </div>

InterMimic features one unified policy, spanning diverse full-body interactions with dynamic, heterogeneous objects, and it works out-of-the-box for both SMPL-X and Unitree G1 humanoids.
📹 Demo
<p align="center"> <img src="assets/InterMimic.gif" align="center" width=60% > </p>

🔥 News
- [2026-02-09] 🚀 Multi-GPU training support is here!
- [2025-12-17] 🚀 Isaac Gym checkpoints are compatible with IsaacLab inference?! Check out the newly released implementation.
- [2025-12-15] IsaacLab support is underway! Data replay is ready—more coming in the next release ☕️
- [2025-12-07] 🚀 Released a data conversion pipeline for bringing InterAct into simulation. The processing code is available in the InterAct repository.
- [2025-06-10] Released instructions for student policy inference.
- [2025-06-03] Initial release of PSI and the processed data. Next release: teacher policy inference for dynamics-aware retargeting, and student policy inference.
- [2025-05-26] It's been a while! The student policy training pipeline has been released! The PSI and other data construction pipelines will follow soon.
- [2025-04-18] Released a checkpoint with high-fidelity physics and enhanced contact precision.
- [2025-04-11] The training code for teacher policies is live—try training your own policy!
- [2025-04-05] We're excited by the overwhelming interest in humanoid robot support and are ahead of schedule in open-sourcing our Unitree-G1 integration—starting with a small demo with support for G1 with its original three-finger dexterous hands. Join us in exploring whole-body loco-manipulation with humanoid robots!
- [2025-04-04] InterMimic has been selected as a CVPR Highlight Paper 🏆. More exciting developments are on the way!
- [2025-03-25] We’ve officially released the codebase and checkpoint for teacher policy inference demo — give it a try! ☕️
📖 Getting Started
Dependencies
Isaac Gym environment
- Create a dedicated conda environment (Python 3.8) and install PyTorch + repo deps:

  ```bash
  conda create -n intermimic-gym python=3.8
  conda activate intermimic-gym
  conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
  pip install -r requirement.txt
  ```

  (Alternatively, start from environment.yml, though it includes some optional extras.)

- Install Isaac Gym following NVIDIA's instructions.

- Fix the Isaac Gym shared-library lookup when using conda by exporting:

  ```bash
  export LD_LIBRARY_PATH="$CONDA_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
  ```

  Do this after every `conda activate intermimic-gym` and before launching Gym scripts; it ensures `libpython3.8.so` is discoverable.
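If you would rather not retype the export, conda can run it for you on activation via an `activate.d` hook. The snippet below is an optional convenience sketch (the hook filename `isaacgym_ld_path.sh` is our own choice, not something the repo provides); run it once while the `intermimic-gym` environment is active:

```shell
# Register an activation hook so conda exports LD_LIBRARY_PATH automatically.
# Requires an activated conda environment (CONDA_PREFIX must be set).
if [ -n "$CONDA_PREFIX" ]; then
  HOOK_DIR="$CONDA_PREFIX/etc/conda/activate.d"
  mkdir -p "$HOOK_DIR"
  # Single quotes keep the variables literal so they expand at activation time.
  printf 'export LD_LIBRARY_PATH="$CONDA_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"\n' \
    > "$HOOK_DIR/isaacgym_ld_path.sh"
  echo "Hook installed to $HOOK_DIR/isaacgym_ld_path.sh"
else
  echo "Activate the intermimic-gym env first (CONDA_PREFIX is unset)."
fi
```

After this, every `conda activate intermimic-gym` sets the library path without manual exports.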
Isaac Lab environment
- Install Isaac Lab separately by following the official guide and keep that environment isolated (typically via Isaac Sim's python or the provided uv/conda env). Recommended versions: Isaac Sim 5.1.0 with IsaacLab v2.3.1.

- Export `ISAACLAB_PATH` once per shell session so our helper scripts (which source `$ISAACLAB_PATH/isaaclab.sh`) can locate your install:

  ```bash
  export ISAACLAB_PATH=/path/to/your/IsaacLab
  ```

- Optional: if you plan to use the `--record-video` flag in our replay script, install `imageio` (and `imageio-ffmpeg` for MP4 support) inside the Isaac Lab Python environment:

  ```bash
  $ISAACLAB_PATH/isaaclab.sh -p -m pip install --upgrade imageio imageio-ffmpeg
  ```
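Before running the helper scripts, it can save a debugging round-trip to check that `ISAACLAB_PATH` actually points at an install. A minimal sanity-check sketch (the `$HOME/IsaacLab` fallback is our assumption, not a repo default):

```shell
# Verify that ISAACLAB_PATH points at a real Isaac Lab checkout.
ISAACLAB_PATH="${ISAACLAB_PATH:-$HOME/IsaacLab}"   # assumed default location
if [ -f "$ISAACLAB_PATH/isaaclab.sh" ]; then
  LAB_STATUS="ok"
else
  LAB_STATUS="missing"
fi
echo "Isaac Lab at $ISAACLAB_PATH: $LAB_STATUS"
```

If this reports `missing`, re-export `ISAACLAB_PATH` before launching any of the Isaac Lab scripts below.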
Data
- Download the dataset, unzip it, and move the extracted folder to `InterAct/OMOMO_new/`. This build contains minor fixes to the original release, so your results may deviate slightly from those reported in the paper.

- 🔥 We recommend processing the data with our InterAct pipeline to obtain richer HOI skills and higher-quality outputs than the original OMOMO dataset.
Data Replay
To replay the ground-truth data, you have two options:
Isaac Gym
```bash
sh isaacgym/scripts/data_replay.sh
```
Isaac Lab
```bash
./isaaclab/scripts/run_data_replay.sh --num-envs 8 --motion-dir InterAct/OMOMO_new
```
Helpful flags for the Isaac Lab demo:
- `--num-envs`: sets both `cfg.num_envs` and `cfg.scene.num_envs`.
- `--headless`: launches Isaac Sim without the viewer.
- `--motion-dir`: dataset directory relative to `$INTERMIMIC_PATH`.
- `--no-playback`: disables dataset playback so you can step physics manually.
- `--record-video /path/to/video.mp4`: captures RGB frames each step (requires `imageio`).
- `--video-fps`: frame rate for `--record-video` captures (defaults to 30 FPS).
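For instance, a headless replay that records a video combines several of these flags. The values and output path below are illustrative; the snippet guards the call so it only runs from a checkout where the script exists:

```shell
# Illustrative flag combination for a headless, video-recording replay.
REPLAY="./isaaclab/scripts/run_data_replay.sh"
if [ -x "$REPLAY" ]; then
  "$REPLAY" --num-envs 4 --headless \
    --motion-dir InterAct/OMOMO_new \
    --record-video outputs/replay.mp4 --video-fps 30
else
  echo "Run from the repo root so $REPLAY is available."
fi
```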
Teacher Policy Training
To train a teacher policy, execute the following commands:
```bash
sh isaacgym/scripts/train_teacher.sh
```
A higher-fidelity simulation suited to low-dynamic interaction (trading some efficiency for realism):

```bash
sh isaacgym/scripts/train_teacher_new.sh
```
How to enable PSI
Open the training config, for example `omomo_train_new.yaml`, and set:

```yaml
physicalBufferSize: <integer greater than 1>
```
Student Policy Training
Download the data from the teacher's retargeting and correction. Then, to train a student policy via distillation, run:

```bash
sh isaacgym/scripts/train_student.sh
```
To train with a transformer network architecture:

```bash
sh isaacgym/scripts/train_student_transformer.sh
```
🔥 Multi-GPU Training
For faster training with multiple GPUs, we provide multi-GPU versions of the training scripts. These scripts use torchrun to launch distributed training across all available GPUs.
Teacher Policy (Multi-GPU)
```bash
# Uses all available GPUs by default
sh isaacgym/scripts/train_teacher_multigpu.sh

# High-fidelity simulation variant
sh isaacgym/scripts/train_teacher_new_multigpu.sh
```
Student Policy (Multi-GPU)
```bash
# MLP-based student policy
sh isaacgym/scripts/train_student_multigpu.sh

# Transformer-based student policy
sh isaacgym/scripts/train_student_transformer_multigpu.sh
```
Specifying the Number of GPUs
By default, all available GPUs are used. To specify a different number:
```bash
NUM_GPUS=2 sh isaacgym/scripts/train_teacher_multigpu.sh
```
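Since the scripts launch via `torchrun`, a launcher typically resolves the GPU count by honoring `NUM_GPUS` if set and otherwise detecting the hardware. The sketch below shows one common pattern; it is our illustration, not the repo's exact logic:

```shell
# Resolve GPU count: honor NUM_GPUS if exported, else detect via nvidia-smi.
if command -v nvidia-smi >/dev/null 2>&1; then
  DETECTED_GPUS="$(nvidia-smi -L 2>/dev/null | wc -l)"
else
  DETECTED_GPUS=1
fi
# Fall back to a single process if detection found nothing.
[ "$DETECTED_GPUS" -ge 1 ] || DETECTED_GPUS=1
NUM_GPUS="${NUM_GPUS:-$DETECTED_GPUS}"
echo "torchrun --standalone --nproc_per_node=$NUM_GPUS ..."
```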
Training Hyperparameters for Multi-GPU
With multi-GPU training, gradients are averaged across all GPUs, so each update step effectively processes more data. You may want to adjust the following in your training config (e.g., omomo.yaml):
- `mini_epochs`: can be reduced
- `minibatch_size`: can be reduced
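As a rule of thumb (our assumption, not a repo-prescribed formula), dividing the single-GPU `minibatch_size` by the GPU count keeps the effective batch size of each update roughly constant. The numbers below are illustrative:

```shell
# Keep the effective batch size constant across GPU counts.
NUM_GPUS=4
SINGLE_GPU_MINIBATCH=16384                         # assumed single-GPU config value
PER_GPU_MINIBATCH=$((SINGLE_GPU_MINIBATCH / NUM_GPUS))
echo "Set minibatch_size to $PER_GPU_MINIBATCH when training on $NUM_GPUS GPUs"
```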
Teacher Policy Inference
We’ve released a checkpoint for one (out of 17) teacher policy on OMOMO, along with some sample data. To get started:
- Download the checkpoints and place them in the current directory.

- Then, run the following commands:

  ```bash
  sh isaacgym/scripts/test_teacher.sh
  ```

For quantitative evaluation with metrics (execution steps, pose errors, success rate):

```bash
sh isaacgym/scripts/eval_teache
```
