GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates

Project Page | Paper

World-Grounded Human Motion Recovery via Gravity-View Coordinates
Zehong Shen<sup>*</sup>, Huaijin Pi<sup>*</sup>, Yan Xia, Zhi Cen, Sida Peng, Zechen Hu, Hujun Bao, Ruizhen Hu, Xiaowei Zhou
SIGGRAPH Asia 2024

<p align="center"> <img src=docs/example_video/project_teaser.gif alt="animated" /> </p>
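As a rough illustration of the coordinate frame the title refers to, one plausible construction (a sketch, not necessarily the paper's exact formulation) builds an orthonormal world frame whose up axis opposes gravity and whose forward axis is the camera view direction projected onto the horizontal plane via Gram-Schmidt:

```python
import numpy as np

def gravity_view_frame(gravity, view_dir):
    """Sketch of a gravity-view frame: up opposes gravity, forward is the
    view direction with its vertical component removed. Illustrative only;
    not taken from the GVHMR codebase."""
    up = -np.asarray(gravity, dtype=float)
    up /= np.linalg.norm(up)
    fwd = np.asarray(view_dir, dtype=float)
    fwd = fwd - np.dot(fwd, up) * up       # project onto horizontal plane
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)              # completes a right-handed frame
    return np.stack([right, fwd, up], axis=0)  # rows are the x, y, z axes

# Example: gravity along -y (camera convention), camera looking roughly at +z
R = gravity_view_frame([0.0, -9.8, 0.0], [0.3, 0.1, 1.0])
```

Any rotation about the gravity axis leaves such a frame valid, which is why anchoring the forward axis to the view direction makes the frame well-defined per camera.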

News 🔥

  • [2025-03-08] DPVO is no longer used by default. We implemented SimpleVO, which is more efficient and compatible with GVHMR.
  • [2025-03-08] We added a new option f_mm to specify the focal length of a full-frame camera in mm.
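A full-frame sensor is 36 mm wide, so a focal length given in mm converts to pixel units by scaling with the image width. This small helper is illustrative only (it is not part of the GVHMR codebase) and assumes the image spans the full 36 mm sensor width:

```python
def fullframe_focal_to_pixels(f_mm, image_width_px, sensor_width_mm=36.0):
    """Convert a 35 mm full-frame focal length (mm) to pixels.

    Assumes the image spans the full sensor width; hypothetical helper
    for illustration, not an API of this repository.
    """
    return f_mm / sensor_width_mm * image_width_px

# Example: a 24 mm lens on a 1920-pixel-wide frame
f_px = fullframe_focal_to_pixels(24.0, 1920)  # -> 1280.0
```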

Setup

Please see installation for details.

Quick Start

<img src="https://i.imgur.com/QCojoJk.png" width="30"> Google Colab demo for GVHMR

<img src="https://s2.loli.net/2024/09/15/aw3rElfQAsOkNCn.png" width="20"> HuggingFace demo for GVHMR

Demo

Demo entry points are provided in tools/demo. Use -s to skip visual odometry when you know the camera is static; otherwise camera motion will be estimated with visual odometry (SimpleVO by default). We also provide a script, demo_folder.py, to run inference on an entire folder.

python tools/demo/demo.py --video=docs/example_video/tennis.mp4 -s
python tools/demo/demo_folder.py -f inputs/demo/folder_in -d outputs/demo/folder_out -s

Reproduce

  1. Test: To reproduce the 3DPW, RICH, and EMDB results in a single run, use the following command:

    python tools/train.py global/task=gvhmr/test_3dpw_emdb_rich exp=gvhmr/mixed/mixed ckpt_path=inputs/checkpoints/gvhmr/gvhmr_siga24_release.ckpt
    

    To test individual datasets, change global/task to gvhmr/test_3dpw, gvhmr/test_rich, or gvhmr/test_emdb.

  2. Train: To train the model, use the following command:

    # The gvhmr_siga24_release.ckpt is trained with 2x4090 for 420 epochs, note that different GPU settings may lead to different results.
    python tools/train.py exp=gvhmr/mixed/mixed
    

    During training, we do not apply the post-processing used in the test script, so the global metric results will differ (but should still be suitable for comparison with baseline methods).

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{shen2024gvhmr,
  title={World-Grounded Human Motion Recovery via Gravity-View Coordinates},
  author={Shen, Zehong and Pi, Huaijin and Xia, Yan and Cen, Zhi and Peng, Sida and Hu, Zechen and Bao, Hujun and Hu, Ruizhen and Zhou, Xiaowei},
  booktitle={SIGGRAPH Asia Conference Proceedings},
  year={2024}
}

Acknowledgement

We thank the authors of WHAM, 4D-Humans, and ViTPose-Pytorch for their great works, without which our project/code would not be possible.
