CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics

<div align="center">

[Website] [Arxiv]

</div> <div style="text-align: center;"> <img src="assets/CooHOI.png" alt="Teaser" width=100% > </div>

Official Implementation of the paper "CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics".

News

  • 09/25/2024: :tada: CooHOI was accepted as a NeurIPS 2024 spotlight. Thanks for the recognition!
  • 12/12/2024: :sparkles: Presented CooHOI at NeurIPS 2024 in Vancouver. Check out our poster.
  • 12/19/2024: :tada: Code open-sourced!
  • 06/01/2025: :art: Fixed several bugs reported in Issues. Uploaded training-curve plots for reference.

Installation

Download Isaac Gym Preview 4 from the NVIDIA website, or via the command line:

```shell
wget https://developer.nvidia.com/isaac-gym-preview-4
tar -xvzf isaac-gym-preview-4
```

Create a conda environment:

```shell
conda create -n coohoi python=3.8
conda activate coohoi
```

Install the Isaac Gym Python package:

```shell
pip install -e isaacgym/python
```

Install the other dependencies:

```shell
pip install -r requirements.txt
```

If you encounter the error ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, point LD_LIBRARY_PATH at your conda environment's lib directory:

```shell
export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib
```
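
If you are unsure where that lib directory lives, Python itself can report it. This is just a convenience sketch, not part of the repo; run it inside the activated coohoi environment:

```python
import sysconfig

# LIBDIR is the directory containing libpython for the running
# interpreter, e.g. /path/to/conda/envs/coohoi/lib on Linux.
libdir = sysconfig.get_config_var("LIBDIR")
print(libdir)
```

Export the printed path as LD_LIBRARY_PATH before launching training.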

Commands

Typical reward curves during training should look like the following:

<div style="text-align: center;"> <img src="assets/RewardCurves.png" alt="Reward Curves" width=100% > </div>

Reproducing the Results from Our Paper

To see our results on the single-agent object-carrying task:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py --test \
--task HumanoidAMPCarryObject \
--num_envs 16 \
--cfg_env coohoi/data/cfg/humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/amp_humanoid_task.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--checkpoint coohoi/data/models/SingleAgent.pth
```

To see our results on the two-agent object-carrying task:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py --test \
--task ShareHumanoidCarryObject \
--num_envs 16 \
--cfg_env coohoi/data/cfg/share_humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/share_humanoid_task_coohoi.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--checkpoint coohoi/data/models/TwoAgent.pth
```

Single Humanoid Skill Training

Training command:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py \
--task HumanoidAMPCarryObject \
--cfg_env coohoi/data/cfg/humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/amp_humanoid_task.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--headless \
--wandb_name "<experiment_name>"
```

You will find your checkpoints in the output/Humanoid_<date>_<time>/nn directory. To evaluate:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py --test \
--task HumanoidAMPCarryObject \
--num_envs 16 \
--cfg_env coohoi/data/cfg/humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/amp_humanoid_task.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--checkpoint <checkpoint_path>
```

e.g.

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py --test \
--task HumanoidAMPCarryObject \
--num_envs 16 \
--cfg_env coohoi/data/cfg/humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/amp_humanoid_task.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--checkpoint output/Humanoid_19-16-52-17/nn/Humanoid.pth
```
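
If you run many experiments, picking the newest checkpoint path by hand gets tedious. A small helper (hypothetical, not part of the repo; it only assumes the output/Humanoid_<date>_<time>/nn layout described above) can locate it:

```python
from pathlib import Path
from typing import Optional

def latest_checkpoint(output_root: str = "output") -> Optional[Path]:
    """Return the most recently modified .pth file found under any
    output/Humanoid_*/nn directory, or None if no checkpoint exists."""
    ckpts = sorted(
        Path(output_root).glob("Humanoid_*/nn/*.pth"),
        key=lambda p: p.stat().st_mtime,
    )
    return ckpts[-1] if ckpts else None
```

You could then pass the result to --checkpoint when evaluating your latest run.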

Two Humanoids Cooperation Training

By default, two-humanoid cooperation training starts by fine-tuning a single-humanoid policy. The single-agent policy checkpoint is loaded via --pretrain_checkpoint <ckpt_path>; you can point this at your own checkpoint.

Cooperation training:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py \
--task ShareHumanoidCarryObject \
--cfg_env coohoi/data/cfg/share_humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/share_humanoid_task_coohoi.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--headless \
--is_finetune \
--pretrain_checkpoint <ckpt_path> \
--wandb \
--wandb_name "<experiment_name>"
```

Note: <ckpt_path> should be the relative path to the single-agent policy checkpoint, which is used to initialize the cooperation policy.

e.g.,

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py \
--task ShareHumanoidCarryObject \
--cfg_env coohoi/data/cfg/share_humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/share_humanoid_task_coohoi.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--headless \
--is_finetune \
--pretrain_checkpoint coohoi/data/models/SingleAgent.pth \
--wandb \
--wandb_name "CooHOI Training"
```

Evaluation:

```shell
CUDA_VISIBLE_DEVICES=0 python coohoi/run.py --test \
--task ShareHumanoidCarryObject \
--num_envs 16 \
--cfg_env coohoi/data/cfg/share_humanoid_carrybox.yaml \
--cfg_train coohoi/data/cfg/train/share_humanoid_task_coohoi.yaml \
--motion_file coohoi/data/motions/coohoi_data/coohoi_data.yaml \
--checkpoint <ckpt_path>
```

Acknowledgement

Citation

If you use our code in your work, please consider citing:

```bibtex
@inproceedings{gao2024coohoi,
  title     = {CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics},
  author    = {Gao, Jiawei and Wang, Ziqin and Xiao, Zeqi and Wang, Jingbo and Wang, Tai and Cao, Jinkun and Hu, Xiaolin and Liu, Si and Dai, Jifeng and Pang, Jiangmiao},
  booktitle = {Advances in Neural Information Processing Systems},
  doi       = {10.52202/079017-2532},
  pages     = {79741--79763},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2024/file/918b9487f8ea4661e8ba5a02b2126658-Paper-Conference.pdf},
  year      = {2024}
}
```