# PBHC

Official implementation of "KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills".
## Demo

## News
- [2025-10] Release support for general motion tracking.
- [2025-09] KungfuBot is accepted by NeurIPS 2025!
- [2025-06] We release the code and paper for PBHC.
## About
This is the official implementation of the paper KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills. It also supports the general motion tracking introduced in KungfuBot2: Learning Versatile Motion Skills for Humanoid Whole-Body Control.
Our paper introduces a physics-based control framework that enables humanoid robots to learn and reproduce challenging motions through multi-stage motion processing and adaptive policy training.
This repository includes:
- Motion processing pipeline
  - Collect human motion from various sources (video, LAFAN, AMASS, etc.) into a unified SMPL format (`motion_source/`)
  - Filter, correct, and retarget human motion to the robot (`smpl_retarget/`)
  - Visualize and analyze the processed motions (`smpl_vis/`, `robot_motion_process/`)
- RL-based motion imitation framework (`humanoidverse/`)
  - Train the policy in IsaacGym
  - Deploy trained policies in MuJoCo for sim2sim verification. The framework is designed for easy extension: custom policies and real-world deployment modules can be plugged in with minimal effort
- Example data (`example/`)
  - Sample motion data from our experiments (`example/motion_data/`; you can visualize the motion data with the tools in `robot_motion_process/`)
  - A pretrained policy checkpoint (`example/pretrained_hors_stance_pose/`)
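To illustrate the "plug in a custom policy" idea, here is a minimal sketch of a sim2sim-style rollout loop. All names in it (`StubPolicy`, `StubSim`, the observation/action sizes) are hypothetical placeholders, not the repository's actual API:

```python
import numpy as np

class StubPolicy:
    """Stands in for a trained policy checkpoint; returns zero actions."""
    def __init__(self, num_actions: int):
        self.num_actions = num_actions

    def act(self, obs: np.ndarray) -> np.ndarray:
        # A real policy would run a neural network forward pass here.
        return np.zeros(self.num_actions)

class StubSim:
    """Stands in for a MuJoCo environment with a fixed observation size."""
    def __init__(self, num_obs: int):
        self.num_obs = num_obs

    def reset(self) -> np.ndarray:
        return np.zeros(self.num_obs)

    def step(self, action: np.ndarray) -> np.ndarray:
        # A real environment would apply the action and advance physics here.
        return np.zeros(self.num_obs)

def rollout(policy, sim, num_steps: int = 100) -> int:
    """Run a fixed-length observe-act loop and return the step count."""
    obs = sim.reset()
    for _ in range(num_steps):
        obs = sim.step(policy.act(obs))
    return num_steps

# Sizes below are illustrative guesses, not the framework's real dimensions.
steps = rollout(StubPolicy(num_actions=29), StubSim(num_obs=96))
```

Swapping in a real checkpoint or a real-world deployment backend amounts to replacing the two stub classes while keeping the loop unchanged.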
## Usage
1. Refer to `INSTALL.md` for environment setup and installation instructions.
2. Each module folder (e.g., `humanoidverse`, `smpl_retarget`) contains a dedicated `README.md` explaining its purpose and usage.
3. To make your robot perform a new motion:
   1. Collect the motion data from the source and process it into the SMPL format (`motion_source/`).
   2. Retarget the motion data to the robot (`smpl_retarget/`; choose the `Mink` or `PHC` pipeline as you like).
   3. Visualize the processed motion to check whether the motion quality is satisfactory (`smpl_vis/`, `robot_motion_process/`).
   4. Train a policy for the processed motion in IsaacGym (`humanoidverse/`).
   5. Deploy the policy in MuJoCo or on a real-world robot (`humanoidverse/`).
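One part of checking motion quality before training is making sure the retargeted trajectory is physically plausible for the robot. As a hypothetical illustration (not a tool from this repository), a quick per-joint peak-velocity check can be done with finite differences; `traj` is assumed to be joint angles in radians with shape `(num_frames, num_joints)` at frame rate `fps`:

```python
import numpy as np

def max_joint_speed(traj: np.ndarray, fps: float) -> np.ndarray:
    """Per-joint peak angular speed (rad/s) via finite differences."""
    vel = np.diff(traj, axis=0) * fps   # frame-to-frame delta times frame rate
    return np.abs(vel).max(axis=0)

# Toy trajectory: joint 0 sweeps 1 rad over 30 frames at 30 fps; joint 1 is still.
traj = np.zeros((31, 2))
traj[:, 0] = np.linspace(0.0, 1.0, 31)
peak = max_joint_speed(traj, fps=30.0)  # joint 0 peaks near 1 rad/s
```

Comparing `peak` against the robot's actuator velocity limits is a cheap way to catch retargeting artifacts before spending GPU hours on training.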
## Folder Structure
- `description`: description files for the SMPL model and the G1 robot.
- `motion_source`: docs for obtaining SMPL-format data.
- `smpl_retarget`: tools for retargeting SMPL motion to the G1 robot.
- `smpl_vis`: tools for visualizing SMPL-format data.
- `robot_motion_process`: tools for processing robot-format motion, including visualization, interpolation, and trajectory analysis.
- `humanoidverse`: RL policy training.
- `example`: example motions and checkpoints for using PBHC.
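As a sketch of the kind of interpolation done when processing robot motions, here is a simple frame-rate resampler using per-joint linear interpolation. This is a hypothetical illustration, not the repository's implementation; real pipelines may use splines for joint angles and slerp for orientations:

```python
import numpy as np

def resample(traj: np.ndarray, src_fps: float, dst_fps: float) -> np.ndarray:
    """Resample a (num_frames, num_joints) joint-angle trajectory to dst_fps."""
    duration = (traj.shape[0] - 1) / src_fps
    t_src = np.linspace(0.0, duration, traj.shape[0])
    # Small epsilon keeps the final timestamp despite floating-point rounding.
    t_dst = np.arange(0.0, duration + 1e-9, 1.0 / dst_fps)
    return np.stack(
        [np.interp(t_dst, t_src, traj[:, j]) for j in range(traj.shape[1])],
        axis=1,
    )

# Toy example: a 2-second, 3-joint clip at 30 fps upsampled to 60 fps.
traj_30 = np.random.default_rng(0).normal(size=(61, 3))
traj_60 = resample(traj_30, src_fps=30.0, dst_fps=60.0)  # shape (121, 3)
```

Resampling like this is commonly needed to match a motion clip's frame rate to the simulator's control frequency.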
## Citation
If you find our work helpful, please cite:
```bibtex
@article{xie2025kungfubot,
  title={KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills},
  author={Xie, Weiji and Han, Jinrui and Zheng, Jiakun and Li, Huanyu and Liu, Xinzhe and Shi, Jiyuan and Zhang, Weinan and Bai, Chenjia and Li, Xuelong},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}

@article{han2025kungfubot2,
  title={KungfuBot2: Learning Versatile Motion Skills for Humanoid Whole-Body Control},
  author={Han, Jinrui and Xie, Weiji and Zheng, Jiakun and Shi, Jiyuan and Zhang, Weinan and Xiao, Ting and Bai, Chenjia},
  journal={arXiv preprint arXiv:2509.16638},
  year={2025}
}
```
## License

This codebase is released under the CC BY-NC 4.0 license. You may not use the material for commercial purposes, e.g., to make demos advertising your commercial products.
## Acknowledgements
- ASAP: We use the `ASAP` library to build our RL codebase.
- BeyondMimic: We incorporate `BeyondMimic` features into policy training.
- RSL_RL: We use the `rsl_rl` library for the PPO implementation.
- Unitree: We use the `Unitree G1` as our testbed robot.
- MaskedMimic: We use the retargeting pipeline in `MaskedMimic`, which is based on Mink.
- PHC: We incorporate the retargeting pipeline from `PHC` into our implementation.
- GVHMR: We use `GVHMR` to extract motions from videos.
- IPMAN: We filter motions based on the `IPMAN` codebase.
## Contact
Feel free to open an issue or discussion if you encounter any problems or have questions about this project.
For collaborations, feedback, or further inquiries, please reach out to:
- Weiji Xie: xieweiji249@sjtu.edu.cn or Weixin `shisoul`
- Jinrui Han: jrhan82@sjtu.edu.cn or Weixin `Bw_rooneY`
- Chenjia Bai (Corresponding Author): baicj@chinatelecom.cn
- You can also join our Weixin discussion group for timely Q&A. Since the group already exceeds 200 members, you'll need to first add one of the authors on Weixin to receive an invitation to join.
We welcome contributions and are happy to support the community in building upon this work!