IMAGDressing

[AAAI 2025]👔IMAGDressing👔: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation with flexible garment, pose, and scene control, ensuring high fidelity and garment consistency for virtual dressing.


👔IMAGDressing👔: Interactive Modular Apparel Generation for Virtual Dressing

📦️ Release

  • [2025/05/30] 🔥 The supplementary materials for IMAGDressing-v1 are available here.
  • [2024/12/10] 🔥 IMAGDressing-v1 is accepted by AAAI 2025.
  • [2024/08/24] 🔥 We add the train code, feel free to give it a try!
  • [2024/08/23] 🔥 We release the IGPair dataset publicly available for download.
  • [2024/07/30] 🔥 We release the WebUI Code for gradio interface.
  • [2024/07/26] 🔥 We release the online WebUI; thanks to ZeroGPU for providing free A100 GPUs. The original Gradio demo will soon be deprecated.
  • [2024/07/19] 🔥 We release the code and examples for cartoon-style virtual dressing.
  • [2024/07/18] 🔥 We release the technical report of IMAGDressing-v1 and CAMI metric code.
  • [2024/07/16] 🔥 We add the batch inference for full VD and VTON. Thanks @ZhaoChaoqun for the contribution.
  • [2024/07/01] 🔥 We release the test cases in the assets/images directory.
  • [2024/06/21] 🔥 We release the inpainting feature to enable outfit changing. Experimental Feature.
  • [2024/06/13] 🔥 We release the Gradio_demo of IMAGDressing-v1 (Service deprecation imminent).
  • [2024/05/28] 🔥 We release the inference code of SD1.5 that is compatible with IP-Adapter and ControlNet.
  • [2024/05/08] 🔥 We launch the project page of IMAGDressing-v1.

IMAGDressing-v1: Customizable Virtual Dressing

<a href='https://imagdressing.github.io/'><img src='https://img.shields.io/badge/Project-Page-green'></a> <a href='http://arxiv.org/abs/2407.12705'><img src='https://img.shields.io/badge/Technique-Report-red'></a> <a href='https://huggingface.co/feishen29/IMAGDressing'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> <a href='https://huggingface.co/datasets/IMAGDressing/IGPair'><img src='https://img.shields.io/badge/Dataset-IGPair-orange'></a>

🚀 Key Features:

  1. Simple Architecture: IMAGDressing-v1 generates lifelike garments and facilitates easy user-driven scene editing.
  2. New Task, Metric, and Dataset: Introduces the virtual dressing (VD) task, designs a comprehensive affinity metric index (CAMI), and releases the IGPair dataset.
  3. Flexible Plugin Compatibility: Seamlessly integrates with extension plugins such as IP-Adapter, ControlNet, T2I-Adapter, and AnimateDiff.
  4. Rapid Customization: Allows for rapid customization within seconds without the need for additional LoRA training.

🔥 Dataset Demo

You can download the dataset from Baidu Cloud or Huggingface Dataset. By requesting access, you agree to use the data only for academic and personal purposes and not for commercial use.

Dataset Demo

🔥 Examples

<div style="display: flex; justify-content: space-around;"> <img src="assets/scrolling_images1.gif" alt="GIF 1" width="200" /> <img src="assets/scrolling_images2.gif" alt="GIF 2" width="200" /> <img src="assets/scrolling_images3.gif" alt="GIF 3" width="200" /> <img src="assets/scrolling_images4.gif" alt="GIF 4" width="200" /> </div>

compare

<span style="color:red">Conbined with IP-Adapter and Controlnet-Pose</span>

compare

compare

<span style="color:red">Support text prompts for different scenes</span>

different scenes

<span style="color:red">Supports outfit changing in specified areas (Experimental Feature)</span>

inpainting

<span style="color:red">Supports generating cartoon-style images (Experimental Feature)</span>

cartoon

🏷️ Introduction

To address the need for flexible and controllable customizations in virtual try-on systems, we propose IMAGDressing-v1. Specifically, we introduce a garment UNet that captures semantic features from CLIP and texture features from VAE. Our hybrid attention module includes a frozen self-attention and a trainable cross-attention, integrating these features into a frozen denoising UNet to ensure user-controlled editing. We will release a comprehensive dataset, IGPair, with over 300,000 pairs of clothing and dressed images, and establish a standard data assembly pipeline. Furthermore, IMAGDressing-v1 can be combined with extensions like ControlNet, IP-Adapter, T2I-Adapter, and AnimateDiff to enhance diversity and controllability.

framework
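The hybrid attention module described above can be illustrated with a schematic, single-head NumPy sketch. This is a toy illustration, not the actual implementation: dimensions are arbitrary, and the learned projection matrices of real attention layers are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over token rows.
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))        # hidden states of the frozen denoising UNet (16 toy tokens)
garment = rng.standard_normal((8, 32))   # garment-UNet features (CLIP semantics + VAE texture)

self_out = attention(x, x, x)               # frozen self-attention branch
cross_out = attention(x, garment, garment)  # trainable cross-attention branch over garment tokens
hidden = self_out + cross_out               # fused output passed on through the frozen UNet
print(hidden.shape)
```

The key point the sketch captures is that the garment features enter only through the trainable cross-attention term, so the pretrained self-attention and denoising weights can stay frozen.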

🔧 Requirements

conda create --name IMAGDressing python=3.8.10
conda activate IMAGDressing
pip install -U pip

# Install requirements
pip install -r requirements.txt

🌐 Download Models

You can download our models from HuggingFace or Baidu Cloud. The other component models can be downloaded from their original repositories.
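The weights can also be fetched programmatically with `huggingface_hub`. The repo id below comes from the Hugging Face badge above; the local `ckpt/` layout is an assumption, so adjust it to wherever the inference scripts expect weights.

```python
from pathlib import Path

REPO_ID = "feishen29/IMAGDressing"  # from the Hugging Face badge above

def weight_dir(root: str = "ckpt") -> Path:
    """Local directory the inference scripts are assumed to read weights from."""
    return Path(root) / REPO_ID.split("/")[-1]

def fetch_weights(root: str = "ckpt") -> str:
    # Requires `pip install huggingface_hub`; downloads the full model repo.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=REPO_ID, local_dir=str(weight_dir(root)))

print(weight_dir())
```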

🎉 How to Train

# Please download the IGPair data first and modify the path in run.sh
sh run.sh

🎉 How to Test

<span style="color:red">Important Reminder</span>

1. Random faces and poses to dress the assigned clothes

python inference_IMAGdressing.py --cloth_path [your cloth path]

2. Random faces with a given pose to dress the assigned clothes

python inference_IMAGdressing_controlnetpose.py --cloth_path [your cloth path] --pose_path [your posture path]

3. Specify the face and pose to wear the specified clothes

python inference_IMAGdressing_ipa_controlnetpose.py --cloth_path [your cloth path] --face_path [your face path] --pose_path [your posture path]

4. Specify the model to wear the specified clothes (Experimental Feature)

<span style="color:red">Please download the humanparsing and openpose model file from IDM-VTON-Huggingface to the ckpt folder first.</span>

python inference_IMAGdressing_controlnetinpainting.py --cloth_path [your cloth path] --model_path [your model path]

5. Specify the cartoon style for generated images (Experimental Feature)

python inference_IMAGdressing_counterfeit-v30.py --cloth_path [your cloth path] --model_path [your model path]
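The release notes above mention batch inference for full VD and VTON. As a hedged sketch, the per-image commands can be looped over a folder of garment images; only the script name and `--cloth_path` flag come from the commands above, the rest (the `.jpg` glob, the `dry_run` switch) is an assumption for illustration.

```python
import subprocess
from pathlib import Path

def batch_dress(cloth_dir: str, dry_run: bool = False):
    """Invoke inference_IMAGdressing.py once per garment image in cloth_dir."""
    cmds = []
    for cloth in sorted(Path(cloth_dir).glob("*.jpg")):
        cmd = ["python", "inference_IMAGdressing.py", "--cloth_path", str(cloth)]
        cmds.append(cmd)
        if not dry_run:
            # Each run loads the model from scratch; fine for a sketch,
            # but a real batch script would reuse one loaded pipeline.
            subprocess.run(cmd, check=True)
    return cmds
```

For the batch utilities that ship with the repository, see the 2024/07/16 release note above.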

🎉 How to Eval

Evaluating the CAMI-U score

Please use our inference_IMAGdressing.py to generate model images. Then generate the cloth mask from each model image; you can use Self-Correction-Human-Parsing for this. Finally, use the following code to evaluate the score for image generation without specified pose, face, and text scenarios.

python metric/eval.py

Evaluating the CAMI-S score

First, use inference_IMAGdressing_ipa_controlnetpose.py to generate model images. Then, generate the cloth mask based on the model image. Finally, use the following code to evaluate the image generation score for specified pose, face, and text scenarios.

python metric/eval_s.py

🤗Gradio interface 🤗

We also provide a Gradio <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a> interface for a better experience; just run:

pip install modelscope==1.15.0
pip install mmcv-full==1.7.2
pip install mmdet==2.26.0

python app.py --model_weight $MODEL_PATH --server_port 7860

You can specify the --server_port arguments to satisfy your needs!
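For reference, the CLI surface implied by the launch command above can be sketched with argparse. The flag names come from the command; the default port, help strings, and the checkpoint filename in the usage example are assumptions, not the actual app.py.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags taken from the launch command above; defaults are assumptions.
    p = argparse.ArgumentParser(description="IMAGDressing Gradio demo (sketch)")
    p.add_argument("--model_weight", required=True, help="path to the IMAGDressing checkpoint")
    p.add_argument("--server_port", type=int, default=7860, help="port for the Gradio server")
    return p

# Hypothetical invocation; the checkpoint path is illustrative only.
args = build_parser().parse_args(
    ["--model_weight", "ckpt/model.safetensors", "--server_port", "7861"]
)
print(args.server_port)
```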

Or, try it out effortlessly on HuggingFace 🤗

📚 Get Involved

Join us on this exciting journey to transform virtual dressing systems. Star⭐️ our repository to stay updated with the latest advancements, and contribute to making **IMAGDressing** even better!

No findings