HoST: Humanoid Standing-up Control
This is the official PyTorch implementation of the RSS 2025 paper "Learning Humanoid Standing-up Control across Diverse Postures" by
Tao Huang, Junli Ren, Huayi Wang, Zirui Wang, Qingwei Ben, Muning Wen, Xiao Chen, Jianan Li, Jiangmiao Pang
<p align="left"> <img width="98%" src="docs/teaser.png" style="box-shadow: 1px 1px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"> </p>
📑 Table of Contents
- 🔥 News
- 📝 TODO List
- 🛠️ Installation Instructions
- 🤖 Run HoST on Unitree G1
- 🧭 Extend HoST to Other Humanoid Robots
- ✉️ Contact
- 🏷️ License
- 🎉 Acknowledgments
- 📝 Citation
🔥 News
- [2025-06] HoST is selected as a Best Systems Paper Finalist at RSS 2025!
- [2025-05] DroidUp is now supported by HoST! Code is coming soon.
- [2025-05] High Torque Mini Pi is now supported by HoST! Code is available.
- [2025-04] We release the training code, evaluation scripts, and visualization tools.
- [2025-04] HoST was accepted to RSS 2025!
- [2025-02] We release the paper and demos of HoST.
📝 TODO List
- [x] Training code of Unitree G1 across prone postures.
- [x] Training code of Unitree H1.
- [ ] Joint training of supine and prone postures.
- [ ] Joint training over all terrains.
🛠️ Installation Instructions
Clone this repository:
git clone https://github.com/OpenRobotLab/HoST.git
cd HoST
Create a conda environment:
conda env create -f conda_env.yml
conda activate host
Install PyTorch 1.10 with CUDA 11.3:
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
Download and install Isaac Gym:
cd isaacgym/python && pip install -e .
Install rsl_rl (PPO implementation) and legged gym:
cd rsl_rl && pip install -e . && cd ..
cd legged_gym && pip install -e . && cd ..
Error Catching
If you run into installation errors, please refer to this document for solutions.
🤖 Run HoST on Unitree G1
Overview of Main Simulation Motions
<table style="width: 100%; border-collapse: collapse; margin: -5px -0px -12px 0px;"> <tr> <td align="center" style="width: 24%; padding: 2px;"> <img src="docs/results_ground_10000.gif" alt="Ground" style="width: 98%; max-width: 100%;"/><br/> <span style="font-size: 0.9em;">Ground</span> </td> <td align="center" style="width: 24%; padding: 2px;"> <img src="docs/results_platform_12000.gif" alt="Platform" style="width: 98%; max-width: 100%;"/><br/> <span style="font-size: 0.9em;">Platform</span> </td> <td align="center" style="width: 24%; padding: 2px;"> <img src="docs/results_wall_4000.gif" alt="Wall" style="width: 98%; max-width: 100%;"/><br/> <span style="font-size: 0.9em;">Wall</span> </td> <td align="center" style="width: 24%; padding: 2px;"> <img src="docs/results_slope_8000.gif" alt="Slope" style="width: 98%; max-width: 100%;"/><br/> <span style="font-size: 0.9em;">Slope</span> </td> </tr> </table>
Policy Training
Train standing-up policies over different terrains:
python legged_gym/scripts/train.py --task g1_${terrain} --run_name test_g1 # [ground, platform, slope, wall]
After training, you can play the resulting checkpoints:
python legged_gym/scripts/play.py --task g1_${terrain} --checkpoint_path ${/path/to/ckpt.pt} # [ground, platform, slope, wall]
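To sweep all four terrains in sequence, the training command above can be wrapped in a small loop. This is only a convenience sketch with made-up run names; it echoes each command so you can inspect them first, and you can drop the `echo` (or pipe the output to `sh`) to actually launch training:

```shell
# Sweep the four terrains supported by the G1 tasks.
# Echoes each command for inspection; remove 'echo' to launch training for real.
for terrain in ground platform slope wall; do
  echo python legged_gym/scripts/train.py --task "g1_${terrain}" --run_name "test_g1_${terrain}"
done
```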
Policy Evaluation
We also provide evaluation scripts that record success rate, feet movement distance, motion smoothness, and energy consumption:
python legged_gym/scripts/eval/eval_${terrain}.py --task g1_${terrain} --checkpoint_path ${/path/to/ckpt.pt} # [ground, platform, slope, wall]
Domain randomization is applied during evaluation so that the results better reflect how well the policies generalize.
Motion Visualization
<p align="left"> <img width="98%" src="docs/motion_vis.png" style="box-shadow: 1px 1px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"> </p>
First, run the following command to collect the produced motion:
python legged_gym/scripts/visualization/motion_collection.py --task g1_${terrain} --checkpoint_path ${/path/to/ckpt.pt} # [ground, platform, slope, wall]
Second, plot the 3D trajectories of motion keyframes:
python legged_gym/scripts/visualization/trajectory_hands_feet.py --terrain ${terrain} # [ground, platform, slope, wall]
python legged_gym/scripts/visualization/trajectory_head_pelvis.py --terrain ${terrain} # [ground, platform, slope, wall]
Train from Prone Postures
<table style="width: 100%; border-collapse: collapse; margin: -5px -0px -0px 0px;"> <tr> <td align="center" style="width: 33%; padding: 3px;"> <img src="docs/results_leftside.gif" alt="Left-side Lying" style="width: 98%; max-width: 100%; height: auto; box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"/><br/> <span style="font-size: 0.9em;">Left-side Lying</span> </td> <td align="center" style="width: 33%; padding: 3px;"> <img src="docs/results_prone.gif" alt="Prone" style="width: 98%; max-width: 100%; height: auto; box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"/><br/> <span style="font-size: 0.9em;">Prone</span> </td> <td align="center" style="width: 33%; padding: 3px;"> <img src="docs/results_rightside.gif" alt="Right-side Lying" style="width: 98%; max-width: 100%; height: auto; box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"/><br/> <span style="font-size: 0.9em;">Right-side Lying</span> </td> </tr> </table>
We also support training from prone postures:
python legged_gym/scripts/train.py --task g1_ground_prone --run_name test_g1_ground_prone
The learned policies can also handle side-lying postures. However, when training from prone postures, harder constraints on the hip joints are necessary to prevent violent motions. This makes the feasibility of jointly training from prone and supine postures unclear for now; addressing this would be valuable future work.
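One way to picture the harder hip constraints is as shrinking the allowed joint range toward its midpoint in the environment config. The sketch below is purely illustrative: the joint names, numbers, and the `tighten_limits` helper are hypothetical, not HoST's actual configuration (which lives in the `legged_gym` env config classes).

```python
# Hypothetical sketch: tightening hip joint limits for prone-posture training.
# Joint names and ranges below are illustrative, not the actual HoST config.

def tighten_limits(limits, scale):
    """Shrink each (low, high) joint range toward its midpoint by `scale`."""
    tightened = {}
    for joint, (low, high) in limits.items():
        mid, half = (low + high) / 2.0, (high - low) / 2.0
        tightened[joint] = (mid - half * scale, mid + half * scale)
    return tightened

# Nominal hip ranges (radians) when training from supine postures (illustrative).
supine_hip_limits = {
    "hip_pitch": (-2.5, 2.5),
    "hip_roll": (-0.5, 2.9),
    "hip_yaw": (-2.7, 2.7),
}

# For prone postures, shrink the usable range to discourage violent motions.
prone_hip_limits = tighten_limits(supine_hip_limits, scale=0.6)
```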
🧭 Extend HoST to Other Humanoid Robots: Tips
Lessons Learned from Unitree H1 and H1-2
<p align="left"> <img width="98%" src="docs/results_sim_h1_h12.png" style="box-shadow: 1px 1px 6px rgba(0, 0, 0, 0.3); border-radius: 4px;"> </p>
To apply HoST to other robots, follow these steps to make the algorithm work:
- Add keyframes in the URDF: we suggest adding the same keyframes as ours (including keypoints around the ankles) to improve compatibility with new robots. These keyframes are used for reward computation.
- Pulling force: ~60% of the robot's weight. Note that G1's URDF contains two torso links (one real, one virtual), so the configured force is multiplied by 2 during training. You may also modify the condition for applying the force, e.g., remove the base-orientation condition.
- Height for curriculum: ~70% of the robot's height.
- [Height for stage division](./legged_gym/legged_gym/envs/g1/g1_c
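As a quick sanity check, the rule-of-thumb numbers above can be computed from the target robot's specs. The helper below is only an illustration: the example mass and height are assumed values, and the 2× torso-link factor applies specifically to G1's URDF as noted above.

```python
# Rule-of-thumb parameters for adapting HoST to a new robot.
# Percentages follow the tips above; the example robot specs are assumptions.

def pulling_force(mass_kg, n_force_links=1, g=9.81):
    """~60% of the robot's weight, split across the links the force is applied to."""
    return 0.6 * mass_kg * g / n_force_links

def curriculum_height(robot_height_m):
    """Target height for the curriculum: ~70% of the robot's height."""
    return 0.7 * robot_height_m

# Example: a G1-sized robot (assumed ~35 kg, ~1.3 m tall) whose URDF has two
# torso links, so the per-link force is halved (the sim applies it to both).
force_per_link = pulling_force(35.0, n_force_links=2)
target_height = curriculum_height(1.3)
```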
