# UNIC

Official repository of the paper "UNIC: Neural Garment Deformation Field for Real-time Clothed Character Animation".
## 🚀 Getting Started

### 1. Environment Setup

We tested our environment on Ubuntu 20.04 LTS with CUDA 12.1, gcc 9.4.0, and g++ 9.4.0.

```shell
conda create python=3.10 --name unic
conda activate unic
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
git clone https://github.com/unlimblue/KNN_CUDA.git
cd KNN_CUDA
make && make install
cd ..
```
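After installation, a quick sanity check (a small helper of ours, not part of the repo) can confirm the key packages are importable. Note that `pytorch3d` and `knn_cuda` are the import names, which differ from the repository names above:

```python
# Sanity check that the core dependencies installed correctly.
# The names passed in are import names, not pip/repo names.
import importlib.util

def check_modules(names):
    """Return a dict mapping each module name to True if it is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    status = check_modules(["torch", "torchvision", "pytorch3d", "knn_cuda"])
    for name, ok in status.items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If any entry prints `MISSING`, re-run the corresponding install step above.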
### 2. Run Demo

```shell
python -m test --cfg configs/config_train_unic_jk.yaml
```
## 🔬 Training

### 1. Data Preparation

Overall, the data should be organized as follows:

```
<your_data_root>/unity_smpl/
|-- hanfu_dress
|   |-- sequence_*
|   |   |-- deformation
|   |   |   `-- s_*.obj
|   |   |-- motion
|   |   |   `-- s_*.obj
|   |   `-- animation.fbx
|-- jk_dress
|   |-- sequence_*
|   |   |-- deformation
|   |   |   `-- s_*.obj
|   |   |-- motion
|   |   |   `-- s_*.obj
|   |   `-- animation.fbx
|-- princess_dress
|   |-- sequence_*
|   |   |-- deformation
|   |   |   `-- s_*.obj
|   |   |-- motion
|   |   |   `-- s_*.obj
|   |   `-- animation.fbx
|-- tshirt
|   |-- sequence_*
|   |   |-- deformation
|   |   |   `-- s_*.obj
|   |   |-- motion
|   |   |   `-- s_*.obj
|   |   `-- animation.fbx
```
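Before pre-processing, a small script (ours, not part of the repo) can verify that every sequence folder contains the expected sub-folders and FBX file:

```python
# Verify the <your_data_root>/unity_smpl/ layout described above.
# GARMENTS and REQUIRED mirror the tree; the function name is our own.
from pathlib import Path

GARMENTS = ["hanfu_dress", "jk_dress", "princess_dress", "tshirt"]
REQUIRED = ["deformation", "motion", "animation.fbx"]

def find_missing(data_root):
    """Return the list of expected paths that are absent under data_root."""
    missing = []
    root = Path(data_root) / "unity_smpl"
    for garment in GARMENTS:
        garment_dir = root / garment
        if not garment_dir.is_dir():
            missing.append(str(garment_dir))
            continue
        for seq in sorted(garment_dir.glob("sequence_*")):
            missing += [str(seq / r) for r in REQUIRED if not (seq / r).exists()]
    return missing
```

An empty return value means the layout matches the tree above for every sequence found.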
### 2. Data Pre-processing

First, install the FBX Python SDK:

```shell
wget https://damassets.autodesk.net/content/dam/autodesk/www/files/fbx202037_fbxpythonsdk_linux.tar.gz
tar -xzf fbx202037_fbxpythonsdk_linux.tar.gz --no-same-owner
mkdir <your_path_for_fbx>
chmod 777 fbx202037_fbxpythonsdk_linux
./fbx202037_fbxpythonsdk_linux <your_path_for_fbx>
cd <your_path_for_fbx>
conda activate unic
python -m pip install fbx-2020.3.7-cp310-cp310-manylinux1_x86_64.whl
```
> [!TIP]
> If you encounter the error `libc.so.6: version GLIBC_2.28 not found` when running `import fbx`, try adding `deb http://security.debian.org/debian-security buster/updates main` to your `/etc/apt/sources.list`. Then run the following commands:
>
> ```shell
> sudo apt update
> # if a NO_PUBKEY error occurs, run the commented commands:
> # sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A 54404762BBB6E853
> # sudo apt update
> sudo apt list --upgradable
> sudo apt install libc6-dev libc6
> ```
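To confirm the bindings installed correctly before moving on, a minimal import check (the helper name is ours, not part of the repo):

```python
def fbx_status():
    """Report whether the FBX Python bindings are importable in this env."""
    try:
        import fbx  # provided by the wheel installed above
        return "fbx SDK available"
    except ImportError as exc:
        return f"fbx import failed: {exc}"

if __name__ == "__main__":
    print(fbx_status())
```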
Second, run the pre-processing code:

```shell
python -m data.preprocess.unic
```

The pre-processed data will be saved in the `<your_data_path>/pre_processed/` folder.
### 3. Train UNIC

> [!NOTE]
> We tested our training code on NVIDIA RTX 3090 and NVIDIA RTX 2080 Ti GPUs.
> Set `USE_DDP = False` in `train.py` to disable DDP (Distributed Data Parallel) during training if needed.

As an example, run the following command to train a deformation field for the jk dress:

```shell
python -m train --cfg configs/unic_jk_dress.yaml --nodebug
```

Checkpoints will be saved in the `checkpoints/` folder. In our experiments, we chose `epoch300.pth` for all comparisons, evaluations, and presentations.
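To load the same checkpoint programmatically, a small helper (ours, not part of the repo) that prefers `epoch300.pth` and otherwise falls back to the latest saved epoch might look like:

```python
# Pick a checkpoint from checkpoints/, assuming files are named epoch{N}.pth.
# Both the function and the fallback policy are our own convention.
from pathlib import Path

def pick_checkpoint(ckpt_dir, prefer="epoch300.pth"):
    """Return the preferred checkpoint if present, else the latest epoch."""
    preferred = Path(ckpt_dir) / prefer
    if preferred.exists():
        return preferred
    ckpts = sorted(Path(ckpt_dir).glob("epoch*.pth"),
                   key=lambda p: int(p.stem.removeprefix("epoch")))
    return ckpts[-1] if ckpts else None
```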
## Acknowledgments

Thanks to the following works that we refer to and benefit from:

- Codebook Matching: the categorical encoder architecture and the Unity project framework;
- NeRF-Pytorch: the neural field implementation;
- SMPL-to-FBX: the FBX Python SDK usage;
- HOOD: the visualization code.
## License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
