Perpetual Humanoid Control for Real-time Simulated Avatars

Official implementation of the ICCV 2023 paper: "Perpetual Humanoid Control for Real-time Simulated Avatars". In this paper, we present a physics-based humanoid controller that achieves high-fidelity motion imitation and fail-state recovery in the presence of noisy input (e.g. pose estimates from video or generated from language) and unexpected falls. No external forces are used.

[paper] [website] [Video]

<div float="center"> <img src="assets/isaaclab_teaser.gif" /> <img src="assets/phc_teaser.gif" /> <img src="assets/h1_phc.gif" /> </div>

News 🚩

[August 21, 2025] Added sample eval code in IsaacLab. The PHC policy can be run directly in IsaacLab: python scripts/eval_in_isaaclab.py.

[December 10, 2024] Release process for generating offline dataset (PHC_Act) for offline RL or behavior cloning (developed by @kangnil). See docs/offline_dataset.md for more details.

[December 9, 2024] Release retargeting documentation (for retargeting to your own humanoids using SMPL data).

[October 8, 2024] Release support for Unitree H1 and G1 humanoid.

[August 30, 2024] Release support for SMPLX humanoid.

[April 5, 2024] Upgrading to use SMPLSim to automatically create the SMPL humanoid. Please run pip install git+https://github.com/ZhengyiLuo/SMPLSim.git@master to install SMPLSim. SMPLSim requires python 3.8; for python 3.7, please pip install git+https://github.com/ZhengyiLuo/SMPLSim.git@isaac37.

[February 28, 2024] Releasing the Auto-PMCP procedure to train single primitives; this procedure can lead to highly performant imitators without PNN, though they won't have the fail-state recovery capability. See the auto_pmcp_soft flag.

[February 19, 2024] Releasing PHC+ model (100% success rate on AMASS) used in PULSE.

[February 17, 2024] Fixed a bug when overhauling the system to Hydra. Please pull the newest version :).

[February 1, 2024] Overhauling the config system to Hydra.

[January 8, 2024] Support for running inference without SMPL model.

[January 7, 2024] Release language-to-control demo (based on MDM).

[December 19, 2023] Release VR controller tracking code.

[December 14, 2023] Release webcam video-based control demo.

[October 31, 2023] Remove dependency on mujoco 210 and update to the newest mujoco version (for creating xml robot; no more downloads and direct install with pip install mujoco!). Updated amass_occlusion_v3 to 11313 sequences for training (was 11315). Updated requirement.txt.

[October 25, 2023] Training and Evaluation code released.

TODOs

  • [x] Add support for Unitree H1 & G1.

  • [x] Add support for smplx/h (fingers!!!).

  • [x] Release PHC+ model (100% success rate on AMASS) used in PULSE.

  • [x] Release language-based demo code.

  • [x] Release vr controller tracking code.

  • [x] Release video-based demo code.

  • [x] Additional instruction on Isaac Gym SMPL robot.

  • [x] Release training code.

  • [x] Release evaluation code.

Introduction

We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g. pose estimates from video or generated from language) and unexpected falls. Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces and learns to naturally recover from fail-state. Given reference motion, our controller can perpetually control simulated avatars without requiring resets. At its core, we propose the progressive multiplicative control policy (PMCP), which dynamically allocates new network capacity to learn harder and harder motion sequences. PMCP allows efficient scaling for learning from large-scale motion databases and adding new tasks, such as fail-state recovery, without catastrophic forgetting. We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live and real-time multi-person avatar use case.
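The multiplicative composition at the heart of PMCP can be illustrated with a small sketch. Assuming Gaussian action primitives (as in MCP, which PMCP builds on), a composer network weights each primitive's distribution, and the normalized product of the weighted Gaussians is again a Gaussian whose mean is precision-weighted. This is an illustrative numpy version of that composition rule, not the repo's code:

```python
import numpy as np

def mcp_compose(weights, means, stds):
    """Multiplicatively compose Gaussian primitives (MCP-style gating).

    weights: (k,) non-negative gating weights from the composer network
    means, stds: (k, d) per-primitive action means / standard deviations
    Returns the mean and std of the composite Gaussian action distribution.
    """
    w = np.asarray(weights, dtype=float)[:, None]   # (k, 1)
    prec = w / np.asarray(stds, dtype=float) ** 2   # weighted precisions, (k, d)
    var = 1.0 / prec.sum(axis=0)                    # composite variance, (d,)
    mean = var * (prec * np.asarray(means, dtype=float)).sum(axis=0)
    return mean, np.sqrt(var)
```

With equal weights and equal stds, the composite mean is just the average of the primitive means; a primitive with a larger weight (or smaller std) pulls the composite action toward itself, which is how the composer selects among primitives.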

❗️❗️❗️Note that the currently released models use a different coordinate system than SMPL (with negative z as the gravity direction), and the humanoid is modified so that it faces the positive x direction (instead of the original SMPL facing). This is reflected in the "up_right_start" flag in the humanoid robot (smpl_local_robot.py) configuration. This makes the humanoid's heading easier to define and left/right flipping simpler, but it requires further modification when converting back to SMPL (which is provided in the code). In the future I am working towards removing this modification.
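To illustrate just the up-axis part of this difference (the full conversion, including the heading change, is provided in the repo's code), a y-up SMPL point can be rotated into a z-up simulation frame with a +90° rotation about the x-axis. This is a generic sketch, not the repo's exact conversion:

```python
import numpy as np

# Illustrative only: SMPL's canonical frame is y-up, while the simulation
# uses z-up (gravity along -z). Rotating +90° about x maps y-up to z-up.
R_YUP_TO_ZUP = np.array([
    [1.0, 0.0,  0.0],
    [0.0, 0.0, -1.0],
    [0.0, 1.0,  0.0],
])

def to_sim_frame(points_yup):
    """Rotate (N, 3) y-up points into the z-up simulation frame."""
    return np.asarray(points_yup, dtype=float) @ R_YUP_TO_ZUP.T
```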

❗️❗️❗️ Another note: while the MCP/mixture-of-experts model is great for achieving a high success rate, it is not strictly necessary for PHC to work. PHC can work with a single primitive model and still achieve a high success rate, though it won't have the fail-state recovery capability.

Docs

Current Results on Cleaned AMASS (11313 Sequences)

All evaluation is done using the mean SMPL body shape with adjusted height, using the same evaluation protocol as in UHC. Note that a different evaluation protocol will lead to different results, and Isaac Gym itself can produce (slightly) different results depending on batch size/machine setup.

| Models | Succ | G-MPJPE | ACC |
|----------------|:----:|:------------:|:----:|
| PHC | 98.9% | 37.5 | 3.3 |
| PHC-KP | 98.7% | 40.7 | 3.5 |
| PHC+ in PULSE | 100% | 26.6 | 2.7 |
| PHC-Prim (single primitive) | 99.9% | 25.9 | 2.3 |
| PHC-Fut (using future) | 100% | 25.3 | 2.5 |
| PHC-X-Prim (single primitive) | 99.9% | 24.7 | 3.6 |
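G-MPJPE here is the global mean per-joint position error, reported in millimetres and computed in world coordinates (no root alignment). A minimal sketch of the metric, using a hypothetical helper rather than the repo's evaluation code:

```python
import numpy as np

def g_mpjpe_mm(pred, gt):
    """Global MPJPE in millimetres.

    pred, gt: (T, J, 3) joint positions in metres, in world coordinates.
    Returns the mean Euclidean distance over all frames and joints.
    """
    err = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=-1)  # (T, J)
    return 1000.0 * err.mean()
```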

Dependencies

To create the environment, follow the following instructions:

  1. Create a new conda environment and install pytorch:
conda create -n isaac python=3.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirement.txt
  2. Download and set up Isaac Gym.

  3. Download SMPL parameters from SMPL and SMPLX and unzip them into the data/smpl folder. For SMPL, please download the v1.1.0 version, which contains the neutral humanoid, and rename basicmodel_neutral_lbs_10_207_0_v1.1.0.pkl, basicmodel_m_lbs_10_207_0_v1.1.0.pkl and basicmodel_f_lbs_10_207_0_v1.1.0.pkl to SMPL_NEUTRAL.pkl, SMPL_MALE.pkl and SMPL_FEMALE.pkl. For SMPLX, please download the v1.1 version and rename the files accordingly. The file structure should look like this:


|-- data
    |-- smpl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_NEUTRAL.pkl
        |-- SMPL_MALE.pkl
        |-- SMPLX_FEMALE.pkl
        |-- SMPLX_NEUTRAL.pkl
        |-- SMPLX_MALE.pkl

  4. Use the following script to download trained models and sample data.
bash download_data.sh

This will download amass_isaac_standing_upright_slim.pkl, which is a standing-still pose for testing.

To evaluate with your own SMPL data, see the script scripts/data_process/convert_data_smpl.py. Pay special attention to make sure the coordinate system is the same as the one used in simulation (with negative z as the gravity direction).

Make sure you have the SMPL parameters properly set up by running the following scripts:

python scripts/vis/vis_motion_mj.py
python scripts/joint_monkey_smpl.py
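Before running these, a quick sanity check that the renamed model files are actually in place can save a confusing error. This is an illustrative helper, not part of the repo:

```python
from pathlib import Path

# Filenames expected by the setup instructions above.
EXPECTED = [
    "SMPL_NEUTRAL.pkl", "SMPL_MALE.pkl", "SMPL_FEMALE.pkl",
    "SMPLX_NEUTRAL.pkl", "SMPLX_MALE.pkl", "SMPLX_FEMALE.pkl",
]

def missing_smpl_files(smpl_dir="data/smpl"):
    """Return the list of expected SMPL/SMPLX body-model files not found."""
    d = Path(smpl_dir)
    return [name for name in EXPECTED if not (d / name).is_file()]
```

An empty return value means all six renamed files are where the data loaders expect them.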

The SMPL model is used to adjust the height of the humanoid robot to avoid penetration with the ground during data loading.
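The idea behind that height adjustment can be sketched as follows; this is illustrative only (the actual adjustment happens inside the repo's data-loading code):

```python
import numpy as np

def ground_offset(mesh_z):
    """Vertical offset that lifts the lowest mesh vertex to z = 0.

    mesh_z: z-coordinates of the SMPL mesh vertices in the simulation
    frame (gravity along -z). Adding the returned offset to the root
    height prevents the body from starting inside the ground.
    """
    return -float(np.min(mesh_z))
```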

Evaluation

Viewer Shortcuts

| Keyboard | Function |
| ---- | --- |
| f | focus on humanoid |
| Right click + WASD | change view port |
| Shift + Right click + WASD | change view port fast |
| r | reset episode |
| j | apply large force to the humanoid |
| l | record screenshot, press again to stop recording |
| ; | cancel screenshot |
| m | cancel termination based on imitation |

... more shortcuts can be found in phc/env/tasks/base_task.py

Notes on rendering: I am using pyvirtualdisplay to record the video such that you can see all humanoids at the same time (default
