DexGarmentLab
[NeurIPS 2025 Spotlight] DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy

DexGarmentLab includes three major components:
- Environment: We propose the <u>Dexterous Garment Manipulation Environment</u> with 15 different task scenes (with a focus on bimanual coordination) built on 2500+ garments.
- Automated Data Collection: Because garments within a category share the same structure, category-level generalization is attainable. This empowers our proposed <u>Automated Data Collection Pipeline</u> to handle different garment positions, deformations, and shapes given the task config (including grasp position and task sequence) and grasp hand pose provided by a single expert demonstration.
- Generalizable Policy: With diverse collected demonstration data, we introduce the <u>Hierarchical gArment manipuLation pOlicy (HALO)</u>, which combines affordance points and trajectories to generalize across different attributes in different tasks.
Milestone
- [x] (2025.04.25) DexGarmentLab Simulation Environment Release!
- [x] (2025.04.25) DexGarmentLab Automated Data Collection Pipeline Release!
- [x] (2025.05.09) DexGarmentLab Baselines and Generalizable Policy Release!
- [x] (2025.05.09) DexGarmentLab Policy Validation Environment Release!
- [x] (2025.05.10) DexGarmentLab Dataset of Garment Manipulation Tasks Release!
Usage
1. IsaacSim Download
DexGarmentLab is built upon IsaacSim 4.5.0, please refer to NVIDIA Official Document for download.
We recommend placing the Isaac Sim source folder at `~/isaacsim_4.5.0` to match the Python interpreter path specified in the `.vscode/settings.json` file we provide. If you prefer to use a custom location, please make sure that the Python interpreter path in `.vscode/settings.json` is updated accordingly.
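If you use a custom location, the relevant entry in `.vscode/settings.json` looks roughly like this; `python.defaultInterpreterPath` is the standard VS Code setting, and the exact path below is an assumption about where Isaac Sim keeps its bundled interpreter:

```json
{
    // Example only: the provided settings.json may point at a different
    // location; Isaac Sim's python.sh wraps a bundled binary like this one.
    "python.defaultInterpreterPath": "~/isaacsim_4.5.0/kit/python/bin/python3"
}
```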
We will use ~/isaacsim_4.5.0/python.sh to run IsaacSim Python files. For convenience, define a new alias in your '.bashrc' file:
echo 'alias isaac="~/isaacsim_4.5.0/python.sh"' >> ~/.bashrc
source ~/.bashrc
2. Pull Repo
git clone git@github.com:wayrise/DexGarmentLab.git
3. Project Assets Download
Download the Robots, LeapMotion, Scene, Garment, and Human directories from Hugging Face.
We provide an automated download script in the Assets directory.
Following its instructions, you can download all the assets.
isaac Assets/assets_download.py
unzip Robots.zip -d ./Assets
unzip LeapMotion.zip -d ./Assets
unzip Scene.zip -d ./Assets
unzip Garment.zip -d ./Assets
unzip Human.zip -d ./Assets
4. Additional Environment Dependencies for Project
isaac -m pip install -r requirements.txt
Simulation Environment

We introduce 15 garment manipulation tasks across 8 categories, encompassing:
- Garment-Self-Interaction Tasks: Fling Tops, Fling Dress, Fling Trousers, Fold Tops, Fold Dress, Fold Trousers. The key variables include garment position, orientation, and shape.
- Garment-Environment-Interaction Tasks: Hang Dress, Hang Tops, Hang Trousers, Hang Coat, Wear Scarf, Wear Bowl Hat, Wear Baseball Cap, Wear Glove, Store Tops. The key variables include garment position, orientation, shape, and the positions of environment-interaction assets (e.g., hangers, pothooks, humans).
You can run the Python files in 'Env_StandAlone' using the following commands:
# e.g. Fixed Garment Shape, Position, Orientation and Environment Assets Position
isaac Env_StandAlone/Hang_Coat_Env.py
# Available arguments:
# 1. --env_random_flag:
#    True/False, whether to enable environment randomization (including position).
#    This flag only works for Garment-Environment-Interaction tasks.
# 2. --garment_random_flag:
#    True/False, whether to enable garment randomization (including position, orientation, shape).
# 3. --record_video_flag:
#    True/False, whether to record a whole-procedure video.
# 4. --data_collection_flag:
#    True/False, whether to collect data (for policy training).
# e.g.
isaac Env_StandAlone/Hang_Coat_Env.py --env_random_flag True --garment_random_flag True
# runs Hang_Coat_Env with both environment and garment randomization enabled.
Automated Data Collection
Actually, our data collection procedure is already embedded into the Env_StandAlone/<Task_Name>_Env.py files mentioned above. The only required step is to set --data_collection_flag to True.
We provide Data_Collection.sh for convenience:
# usage template: bash Data_Collection.sh <task_name> <demo_num>
# e.g.
bash Data_Collection.sh Hang_Coat 10
# 10 episodes will be saved into 'Data/Hang_Coat', including:
# - final_state_pic: .png files, pictures of the final garment state, used for manual verification of task success.
# - train_data: .npz files, used for training data storage.
# - video: .mp4 files, whole-procedure recordings.
# - data_collection_log.txt: records data collection results, the corresponding assets, and task configurations.
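To sanity-check a collected episode, you can open one of the `.npz` files and list the arrays it stores. A minimal sketch follows; the episode path and the field names used in the demo are invented for illustration (use `data.files` on a real episode to see the actual schema):

```python
import numpy as np

def episode_fields(npz_path):
    """Return the array names stored in one collected .npz episode."""
    with np.load(npz_path, allow_pickle=True) as data:
        return list(data.files)

# Self-contained demo: write a mock episode, then inspect it.
# The field names here are illustrative only, not DexGarmentLab's schema.
np.savez("demo_episode.npz",
         point_cloud=np.zeros((1024, 3)),
         action=np.zeros((7,)))
print(episode_fields("demo_episode.npz"))
```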
You can also download our prepared data from Hugging Face and unzip it into the Data folder. The file structure should look like:
Data/
├── Hang_Coat/
│   ├── final_state_pic
│   ├── train_data
│   ├── video
│   └── data_collection_log.txt
......
└── Fling_Dress/
    ├── final_state_pic
    ├── train_data
    ├── video
    └── data_collection_log.txt
We provide a data-download script for convenience:
isaac Data/data_download.py
# after download, please unzip them into Data/
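If the archives were downloaded but not yet extracted, a small helper like the one below can unzip everything in one go. The layout (one zip per task, directly under Data/) is an assumption; adjust the glob if your download differs:

```python
import glob
import os
import zipfile

def extract_all(data_dir="Data"):
    """Extract every task archive found directly under data_dir."""
    extracted = []
    for archive in sorted(glob.glob(os.path.join(data_dir, "*.zip"))):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(data_dir)
        extracted.append(archive)
    return extracted
```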
Generalizable Policy
Our policy HALO consists of:
- Garment Affordance Model (GAM), which generates target manipulation points for the robot's movement. The corresponding affordance map is also used as the denoising condition for SADP.
- Structure-Aware Diffusion Policy (SADP), which generates the robot's subsequent, garment-structure-aware movement after it reaches the target manipulation points.
Both can be found in the 'Model_HALO/' directory.
GAM
The file structure of GAM is as follows:
GAM/
├── checkpoints/                      # checkpoints of trained GAM for different garment categories
│   ├── Tops_LongSleeve/              # garment category
│   │   ├── assets_list.txt           # list of assets used for validation
│   │   ├── assets_training_list.txt  # list of assets used for training
│   │   ├── checkpoint.pth            # trained model
│   │   └── demo_garment.ply          # demo garment point cloud
│   ├── ......
│   └── Trousers/
├── model                             # meta files of GAM
└── GAM_Encapsulation.py              # encapsulation of GAM
For detailed usage of GAM, please refer to GAM_Usage.md. The files in 'Env_StandAlone/' also provide examples of how to use GAM.
SADP
SADP is suitable for Garment-Environment-Interaction tasks. All related tasks have only one stage.
- Installation
cd Model_HALO/SADP
isaac -m pip install -e .
- Data Preparation
We need to pre-process the .npz data collected in 'Data/' into .zarr data for training.
All you need to do is run 'data2zarr_sadp.sh' in 'Model_HALO/SADP'.
cd Model_HALO/SADP
# usage template:
# bash data2zarr_sadp.sh <task_name> <stage_index> <train_data_num>
bash data2zarr_sadp.sh Hang_Coat 1 100
# Detailed parameters information can be found in the 'data2zarr_sadp.sh' file
The processed data will be saved in 'Model_HALO/SADP/data'. If you want to train SADP on a headless server, please move the data to the same location there.
- Training
cd Model_HALO/SADP
# usage template:
# bash train.sh <task_name> <expert_data_num> <seed> <gpu_id> <DEBUG_flag>
bash train.sh Hang_Coat_stage_1 100 42 0 False
# Detailed parameters information can be found in the 'train.sh' file
# Before training, we recommend setting DEBUG_flag to True to check the training process.
The checkpoints will be saved in 'Model_HALO/SADP/checkpoints'.
SADP_G
SADP_G is suitable for Garment-Self-Interaction tasks, which means the denoising conditions exclude the interaction-object point cloud. Fold_Tops and Fold_Dress have three stages; Fold_Trousers, Fling_Dress, and Fling_Tops have two stages; Fling_Trousers has only one stage.
The whole procedure is the same as for SADP.
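Since the stage counts above determine how many times the conversion script must run per task, a small lookup (values taken directly from the list above) can drive a loop over stages:

```python
# Stage counts per Garment-Self-Interaction task, as listed above.
STAGES = {
    "Fold_Tops": 3,
    "Fold_Dress": 3,
    "Fold_Trousers": 2,
    "Fling_Dress": 2,
    "Fling_Tops": 2,
    "Fling_Trousers": 1,
}

def stage_indices(task):
    """1-based stage indices to pass to data2zarr_sadp_g.sh."""
    return list(range(1, STAGES[task] + 1))

# e.g. print the conversion commands for one task, following the
# usage template from this README.
for i in stage_indices("Fold_Tops"):
    print(f"bash data2zarr_sadp_g.sh Fold_Tops {i} 100")
```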
- Installation
cd Model_HALO/SADP_G
isaac -m pip install -e .
- Data Preparation
cd Model_HALO/SADP_G
# usage template:
# bash data2zarr_sadp_g.sh <task_name> <stage_index> <train_data_num>
bash data2zarr_sadp_g.sh Fold_Tops 2 100
# Detailed parameters information can be found in the 'data2zarr_sadp_g.sh' file