PoseVocab

Code of [SIGGRAPH 2023] "PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling"

<div align="center">

<b>PoseVocab</b>: Learning Joint-structured Pose Embeddings for Human Avatar Modeling

<h2>SIGGRAPH 2023</h2>

Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu

Tsinghua University

Project Page · Paper · Video

</div>

Introduction

We propose PoseVocab, a novel pose encoding method that encodes dynamic human appearances under various poses for human avatar modeling.
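The README does not spell out the encoding itself. As a rough illustration only (not the paper's actual implementation), the joint-structured idea can be sketched as: each joint keeps its own small table of key-pose feature vectors, and a query pose is encoded per joint by distance-weighted interpolation among that joint's key entries. All names, shapes, and the distance metric below are hypothetical.

```python
import numpy as np

def encode_pose(query_angles, key_angles, key_feats, temperature=0.1):
    """Per-joint pose encoding by softmax-weighted interpolation over
    key-pose entries (illustrative sketch, not the paper's code).

    query_angles: (J,) joint angles of the query pose
    key_angles:   (J, K) per-joint key-pose angles
    key_feats:    (J, K, C) per-joint learnable feature vectors
    returns:      (J, C) interpolated per-joint pose embeddings
    """
    # distance of the query to each key pose, computed per joint
    d = np.abs(query_angles[:, None] - key_angles)   # (J, K)
    w = np.exp(-d / temperature)
    w = w / w.sum(axis=1, keepdims=True)             # softmax-like weights
    return (w[..., None] * key_feats).sum(axis=1)    # (J, C)

# toy example: 2 joints, 3 key poses per joint, 4-dim features
rng = np.random.default_rng(0)
key_angles = np.array([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
key_feats = rng.normal(size=(2, 3, 4))
emb = encode_pose(np.array([0.0, 1.0]), key_angles, key_feats)
```

A query that lands exactly on a key pose recovers (approximately) that key's feature vector; queries in between blend neighboring entries, which is what makes the embedding smooth over pose space.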

https://user-images.githubusercontent.com/61936670/243704320-991c017f-16aa-4bda-814c-579a4a7be784.mp4

Installation

Clone this repo, then build and install the custom ops with the following commands.

cd ./utils/posevocab_custom_ops
python setup.py install
cd ../..
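For orientation, a `setup.py` that builds C++/CUDA extensions for PyTorch typically looks like the minimal sketch below. This is a hypothetical file, not the repo's actual `utils/posevocab_custom_ops/setup.py`; source file names and the extension name are placeholders.

```python
# Hypothetical sketch of a PyTorch C++/CUDA extension build script.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="posevocab_custom_ops",
    ext_modules=[
        CUDAExtension(
            name="posevocab_custom_ops",
            sources=["ops.cpp", "ops_kernel.cu"],  # placeholder sources
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

Running `python setup.py install` compiles the listed sources against your local CUDA toolkit and installs the resulting module into the active Python environment.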

SMPL-X & Pretrained Models

Run on THuman4.0 Dataset

Dataset Preparation

  • Download the THuman4.0 dataset. Let's take "subject00" as an example, and denote the root data directory as SUBJECT00_DIR.
  • Specify the data directory and training frame list in gen_data/main_preprocess.py, then run the following scripts.
cd ./gen_data
python main_preprocess.py
cd ..

Training

Note: In the first training stage, our method reconstructs depth maps for the depth-guided sampling in the next stages. If you want to skip the first stage, you can download our provided depth maps from this link, unzip it to SUBJECT00_DIR/depths, and directly run python main.py -c configs/subject00.yaml -m train until the network converges.

  • Stage 1: initial training.
python main.py -c configs/subject00.yaml -m train
  • Stage 2: render depth maps, then continue training with depth-guided sampling.
python main.py -c configs/subject00.yaml -m render_depth_sequences
python main.py -c configs/subject00.yaml -m train
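The note above mentions depth-guided sampling without detail. Conceptually, the depth maps rendered after stage 1 let later stages concentrate ray samples in a narrow band around the observed surface instead of spreading them along the whole ray. A hedged numpy sketch of that idea (function name and band width are hypothetical, not the repo's code):

```python
import numpy as np

def depth_guided_samples(depth, n_samples=16, band=0.05):
    """Place sample depths in a narrow band around a per-pixel
    surface depth (illustrative sketch of depth-guided sampling).

    depth: (H, W) depth map rendered after the first training stage
    returns: (H, W, n_samples) sample depths along each camera ray
    """
    # evenly spaced offsets in [-band, band] around the surface depth
    offsets = np.linspace(-band, band, n_samples)   # (n_samples,)
    return depth[..., None] + offsets               # (H, W, n_samples)

depth = np.full((4, 4), 2.0)   # toy flat depth map at z = 2
z = depth_guided_samples(depth)
```

Compared with uniform sampling over the full ray, all samples land within `band` of the surface, which spends the sample budget where the appearance actually changes.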

Testing

Download testing poses from this link, unzip them to a directory of your choice, denoted as TESTING_POSE_DIR.

  • Specify prev_ckpt in configs/subject00.yaml#L78 as the pretrained model ./pretrained_models/subject00 or the trained one by yourself.
  • Specify data_path in configs/subject00.yaml#L60 as the testing pose path, e.g., TESTING_POSE_DIR/thuman4/pose_01.npz.
  • Run the following script.
python main.py -c configs/subject00.yaml -m test
  • The output results can be found in ./test_results/subject00.
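The testing poses are `.npz` archives (e.g. `pose_01.npz`). Their exact keys are not documented in this README, so before pointing `data_path` at a file it can help to inspect its contents; the snippet below builds a stand-in file purely for demonstration, and the key names (`poses`, `trans`) are assumptions that may differ from the real files.

```python
import numpy as np

# Create a stand-in pose archive to demonstrate inspection;
# a real TESTING_POSE_DIR/thuman4/pose_01.npz has its own keys.
np.savez("pose_demo.npz", poses=np.zeros((10, 75)), trans=np.zeros((10, 3)))

# List every array in the archive with its shape.
with np.load("pose_demo.npz") as data:
    for key in data.files:
        print(key, data[key].shape)
```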

License

MIT License. SMPL-X related files are subject to the license of SMPL-X.

Citation

If you find our code or paper useful for your research, please consider citing:

@inproceedings{li2023posevocab,
  title={PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling},
  author={Li, Zhe and Zheng, Zerong and Liu, Yuxiao and Zhou, Boyao and Liu, Yebin},
  booktitle={ACM SIGGRAPH Conference Proceedings},
  year={2023}
}