

[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals


<!-- PROJECT LOGO --> <p align="center"> <h1 align="center">ICON: Implicit Clothed humans Obtained from Normals</h1> <p align="center"> <a href="https://ps.is.tuebingen.mpg.de/person/yxiu"><strong>Yuliang Xiu</strong></a> · <a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a> · <a href="https://ps.is.mpg.de/~dtzionas"><strong>Dimitrios Tzionas</strong></a> · <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a> </p> <h2 align="center">CVPR 2022</h2> <div align="center"> <img src="./assets/teaser.gif" alt="Logo" width="100%"> </div> <p align="center"> <br> <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a> <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a> <a href='https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgtMk10uPU05ihoA?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> <a href="https://huggingface.co/spaces/Yuliang/ICON" style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-orange'></a><br></br> <a href='https://arxiv.org/abs/2112.09127'> <img src='https://img.shields.io/badge/Paper-PDF-green?style=for-the-badge&logo=arXiv&logoColor=green' alt='Paper PDF'> </a> <a href='https://icon.is.tue.mpg.de/' style='padding-left: 0.5rem;'> <img src='https://img.shields.io/badge/ICON-Page-orange?style=for-the-badge&logo=Google%20chrome&logoColor=orange' alt='Project Page'> <a href="https://discord.gg/Vqa7KBGRyk"><img src="https://img.shields.io/discord/940240966844035082?color=7289DA&labelColor=4a64bd&logo=discord&logoColor=white&style=for-the-badge"></a> <a href="https://youtu.be/hZd6AYin2DE"><img alt="youtube views" title="Subscribe to my YouTube 
channel" src="https://img.shields.io/youtube/views/hZd6AYin2DE?logo=youtube&labelColor=ce4630&style=for-the-badge"/></a> </p> </p> <br /> <br />

News :triangular_flag_on_post:

  • [2022/12/15] ICON belongs to the past, ECON is the future!
  • [2022/09/12] Applied KeypointNeRF's relative-spatial encoding to ICON; quantitative numbers are reported in the evaluation
  • [2022/07/30] <a href="https://huggingface.co/spaces/Yuliang/ICON" style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-orange'></a> <a href='https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgtMk10uPU05ihoA?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> are both available
  • [2022/07/26] New cloth-refinement module is released, try -loop_cloth
  • [2022/06/13] ETH Zürich students from 3DV course create an add-on for garment-extraction
  • [2022/05/16] <a href="https://github.com/Arthur151/ROMP">BEV</a> is supported as optional HPS by <a href="https://scholar.google.com/citations?hl=en&user=fkGxgrsAAAAJ">Yu Sun</a>, see commit #060e265
  • [2022/05/15] Training code is released; see the Training Instructions
  • [2022/04/26] <a href="https://github.com/Jeff-sjtu/HybrIK">HybrIK (SMPL)</a> is supported as optional HPS by <a href="https://jeffli.site/">Jiefeng Li</a>, see commit #3663704
  • [2022/03/05] <a href="https://github.com/YadiraF/PIXIE">PIXIE (SMPL-X)</a>, <a href="https://github.com/mkocabas/PARE">PARE (SMPL)</a>, <a href="https://github.com/HongwenZhang/PyMAF">PyMAF (SMPL)</a> are all supported as optional HPS
<br> <!-- TABLE OF CONTENTS --> <details open="open" style='padding: 10px; border-radius:5px 30px 30px 5px; border-style: solid; border-width: 1px;'> <summary>Table of Contents</summary> <ol> <li> <a href="#who-needs-ICON">Who needs ICON</a> </li> <li> <a href="#instructions">Instructions</a> </li> <li> <a href="#running-demo">Running Demo</a> </li> <li> <a href="#citation">Citation</a> </li> </ol> </details> <br /> <br />

Who needs ICON?

  • If you want to Train & Evaluate on PIFu / PaMIR / ICON using your own data, please check dataset.md to prepare dataset, training.md for training, and evaluation.md for benchmark evaluation.

  • Given a raw RGB image, you can get:

    • image (png):
      • segmented human RGB
      • normal maps of body and cloth
      • pixel-aligned normal-RGB overlap
    • mesh (obj):
      • SMPL-(X) body from PyMAF, PIXIE, PARE, HybrIK, BEV
      • 3D clothed human reconstruction
      • 3D garments (requires 2D mask)
    • video (mp4):
      • self-rotated clothed human
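
As a rough sketch of where these outputs might land on disk (the real naming scheme is decided by `apps/infer.py`; every filename below is a hypothetical placeholder, not the project's actual convention):

```python
from pathlib import Path

# Hypothetical per-subject output layout mirroring the list above.
# The actual filenames come from apps/infer.py and may differ.
def expected_outputs(out_dir: str, subject: str) -> dict:
    base = Path(out_dir) / subject
    return {
        "segmented_rgb_png": base / f"{subject}_rgb.png",      # segmented human RGB
        "normal_maps_png":   base / f"{subject}_normal.png",   # body/cloth normal maps
        "overlap_png":       base / f"{subject}_overlap.png",  # normal-RGB overlap
        "smpl_body_obj":     base / f"{subject}_smpl.obj",     # SMPL(-X) body mesh
        "recon_obj":         base / f"{subject}_recon.obj",    # clothed reconstruction
        "rotation_mp4":      base / f"{subject}.mp4",          # self-rotated video
    }

outs = expected_outputs("./results", "demo")
```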

Figures (images omitted in this rendering): ICON's intermediate results; iterative SMPL pose refinement; final results (image, overlapped normal prediction, ICON, refined ICON); 3D garment extracted from ICON using a 2D mask.

<br>

Instructions

<br>

Running Demo

```bash
cd ICON

# model_type:
#   "pifu"            reimplemented PIFu
#   "pamir"           reimplemented PaMIR
#   "icon-filter"     ICON w/ global encoder (continuous local wrinkles)
#   "icon-nofilter"   ICON w/o global encoder (correct global pose)
#   "icon-keypoint"   ICON w/ relative-spatial encoding (insight from KeypointNeRF)

python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pixie
```
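
To compare backbones on the same inputs, one might sweep every `model_type`. The helper below only assembles the command lines; that each `model_type` string has a matching YAML in `./configs/` is an assumption extrapolated from the `icon-filter` example above:

```python
# Hypothetical sweep helper: builds one `python -m apps.infer` command per
# reimplemented backbone. Config filenames matching the model_type strings
# are an assumption; check ./configs/ for the actual set.
from typing import List

MODEL_TYPES = ["pifu", "pamir", "icon-filter", "icon-nofilter", "icon-keypoint"]

def build_infer_cmd(model_type: str, gpu: int = 0) -> List[str]:
    return [
        "python", "-m", "apps.infer",
        "-cfg", f"./configs/{model_type}.yaml",
        "-gpu", str(gpu),
        "-in_dir", "./examples",
        "-out_dir", f"./results/{model_type}",  # keep runs separate per backbone
        "-hps_type", "pixie",
    ]

cmds = [build_infer_cmd(m) for m in MODEL_TYPES]
```

Each entry in `cmds` could then be handed to `subprocess.run` to execute the sweep sequentially.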

More Qualitative Results

Figures (images omitted in this rendering): comparison with other state-of-the-art methods; predicted normals on in-the-wild images with extreme poses.

<br/> <br/>

Citation

@inproceedings{xiu2022icon,
  title     = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author    = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {13296-13306}
}

Acknowledgments

We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions; Tsvetelina Alexiadis for her help with the AMT perceptual study; Taylor McConnell for her voice-over; Benjamin Pellkofer for the webpage; and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.

We also benefit from many great open-source resources.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union's Horizon 2020 research and innovation programme.
