LATENT
Official implementation of Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data
This is the official implementation of Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data. This repository provides an open-source humanoid robot learning pipeline for motion tracker pre-training, online distillation, and high-level policy learning. The pipeline uses MuJoCo for simulation and supports multi-GPU parallel training.
News 🚩
[March 13, 2026] Tracking codebase and a small subset of human tennis motion data released. You can now track these motions using the tracking pipeline described in our paper.
TODOs
- [x] Release motion tracking codebase
- [x] Release a small subset of human tennis motion data
- [ ] Release all human tennis motion data we used
- [ ] Release pretrained trackers to track all released human tennis motion data
- [ ] Release DAgger online distillation codebase
- [ ] Release pretrained latent action model trained on our tennis motion data
- [ ] Release high-level tennis-playing policy training codebase
- [ ] Release sim2real designs for high-level tennis-playing policy
- [ ] Release more pretrained checkpoints
Initialization

- Clone the repository:

  ```bash
  git clone git@github.com:GalaxyGeneralRobotics/LATENT.git
  ```

- Create a virtual environment and install dependencies:

  ```bash
  uv sync -i https://pypi.org/simple
  ```

- Create a `.env` file in the project directory with the following content:

  ```bash
  export GLI_PATH=<absolute_project_path>
  export WANDB_PROJECT=<your_project_name>
  export WANDB_ENTITY=<your_entity_name>
  export WANDB_API_KEY=<your_wandb_api_key>
  ```
- Download the retargeted tennis data and put it under `storage/data/mocap/Tennis/`. The file structure should look like:

  ```
  storage/data
  ├── mocap
  │   └── Tennis
  │       ├── p1
  │       │   ├── High-Hit02_Tennis\ 001.npz
  │       │   └── ...
  │       └── ...
  └── assets
      └── ...
  ```
- Initialize the assets:

  ```bash
  python latent_mj/app/mj_playground_init.py
  ```
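To sanity-check the data layout above, a short script like the following can list the downloaded motion clips. This is not part of the repository; only the `storage/data/mocap/Tennis` path comes from the step above, everything else is a generic sketch.

```python
from pathlib import Path

def find_motion_clips(root: str) -> list[Path]:
    """Recursively collect retargeted motion clips (.npz) under `root`."""
    return sorted(Path(root).rglob("*.npz"))

if __name__ == "__main__":
    clips = find_motion_clips("storage/data/mocap/Tennis")
    print(f"Found {len(clips)} motion clips")
    for clip in clips[:5]:
        print(" ", clip)
```

If this prints zero clips, re-check that the data was extracted into the directory structure shown above.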
Usage
Initialize environment

```bash
source .venv/bin/activate; source .env;
```
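If training later fails with missing-variable errors, a quick check like the sketch below (not part of the repository; the variable names come from the `.env` file described in Initialization) can confirm that the environment is actually set up:

```python
import os

REQUIRED_VARS = ["GLI_PATH", "WANDB_PROJECT", "WANDB_ENTITY", "WANDB_API_KEY"]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```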
Motion tracking
The motion tracker training pipeline follows the implementation in OpenTrack.
Train the model
```bash
# Train without DR
python -m latent_mj.learning.train.train_ppo_track_tennis --task G1TrackingTennis --exp_name <your_exp_name>

# Train with DR
python -m latent_mj.learning.train.train_ppo_track_tennis --task G1TrackingTennisDR --exp_name <your_exp_name>
```
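The `DR` task variant enables domain randomization: physics parameters are resampled within fixed ranges each episode so the learned tracker is robust to sim-to-real model mismatch. The parameter names and ranges below are purely illustrative (the actual randomized quantities live in the task config), but the mechanism looks roughly like this:

```python
import random

# Illustrative randomization ranges; the real task config defines its own.
DR_RANGES = {
    "friction":   (0.6, 1.4),   # scale on ground friction
    "link_mass":  (0.9, 1.1),   # scale on each link's mass
    "motor_gain": (0.85, 1.15), # scale on actuator torque
}

def sample_dr_params(ranges=DR_RANGES, rng=random):
    """Draw one set of physics scales, uniformly within each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

# At the start of each training episode, the simulator would be
# rebuilt (or patched) with a fresh sample:
params = sample_dr_params()
print(params)
```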
Evaluate the model
First, convert the Brax model checkpoint to ONNX:
```bash
python -m latent_mj.app.brax2onnx_tracking --task G1TrackingTennis --exp_name <your_exp_name>
```
Next, run the evaluation script:
```bash
python -m latent_mj.eval.tracking.mj_onnx_video --task G1TrackingTennis --exp_name <your_exp_name> [--use_viewer] [--use_renderer] [--play_ref_motion]
```
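Tracker evaluation typically reports how closely the policy reproduces the reference motion. The repository's evaluation script handles this internally; as a standalone illustration (array shapes and the metric choice are assumptions, not the repo's exact definition), a mean per-joint position error over a trajectory can be computed like this:

```python
import numpy as np

def mean_joint_error(ref: np.ndarray, tracked: np.ndarray) -> float:
    """Mean Euclidean distance between reference and tracked joint
    positions, averaged over time and joints.

    Both arrays are assumed to have shape (T, J, 3):
    T timesteps, J joints, xyz coordinates.
    """
    assert ref.shape == tracked.shape and ref.shape[-1] == 3
    return float(np.linalg.norm(ref - tracked, axis=-1).mean())

# Toy example: a tracked motion offset from the reference by 5 cm on x.
ref = np.zeros((100, 23, 3))
tracked = ref + np.array([0.05, 0.0, 0.0])
print(mean_joint_error(ref, tracked))  # ~0.05 m
```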
Real-World Deployment
For teams interested in reproducing our system, we provide the following real-world deployment details for reference:
- A total of 50+ motion capture cameras were used
- Camera resolution: 2048 × 2048, at 120 Hz
- Motion capture area: 19 × 15 meters
Our real-world experiment setup (including the venue, camera system, lighting, and related infrastructure) was supported by a third-party motion capture service provider. The experiment period lasted approximately 3 weeks, with a total rental cost of around 350k RMB (approximately 50k USD).
Acknowledgement
This repository is built upon jax, brax, loco-mujoco, mujoco_playground, and OpenTrack.
If you find this repository helpful, please cite our work:
```bibtex
@misc{zhang2026learningathletichumanoidtennis,
  title={Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data},
  author={Zhikai Zhang and Haofei Lu and Yunrui Lian and Ziqing Chen and Yun Liu and Chenghuai Lin and Han Xue and Zicheng Zeng and Zekun Qi and Shaolin Zheng and Qing Luan and Jingbo Wang and Junliang Xing and He Wang and Li Yi},
  year={2026},
  eprint={2603.12686},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2603.12686},
}
```
