OmniPart: Part-Aware 3D Generation with Semantic Decoupling and Structural Cohesion [SIGGRAPH Asia 2025]

<div align="center">

Project Page · Paper · Model · Online Demo

</div>

*(Teaser figure)*

🔥 Updates

📅 October 2025

  • Released pretrained models, an interactive demo, training code, and data-processing scripts.

🔨 Installation

Clone the repo:

git clone https://github.com/HKU-MMLab/OmniPart
cd OmniPart

Create a conda environment (optional):

conda create -n omnipart python=3.10
conda activate omnipart

Install dependencies:

pip install -r requirements.txt

💡 Usage

Launch Demo

python app.py

Inference Scripts

When running OmniPart from the command line, you first need to obtain a segmentation mask of the input image. The mask is saved as an .exr file of shape [h, w, 3], where the 2D part_id map is replicated across all three channels.
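A mask with this layout can be built with NumPy as a minimal sketch; the part layout below is hypothetical, and writing the array to .exr is left to your tooling (e.g. OpenCV built with OpenEXR support, or the OpenEXR package):

```python
import numpy as np

# Hypothetical 2D segmentation: each pixel holds an integer part_id.
h, w = 4, 6
part_id = np.zeros((h, w), dtype=np.float32)
part_id[:, 3:] = 1.0  # e.g. part 0 on the left half, part 1 on the right

# Replicate the part_id map across all three channels -> shape [h, w, 3].
mask = np.repeat(part_id[:, :, None], 3, axis=2)

# All three channels carry the same part_id values.
# Saving as a float EXR could then look like:
#   cv2.imwrite("mask.exr", mask)   # requires OpenCV with OpenEXR enabled
```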

python -m scripts.inference_omnipart --image_input {IMAGE_PATH} --mask_input {MASK_PATH}

The required model weights are downloaded automatically:

  • OmniPart model weights → local directory ckpt/

Training

Data processing

Step 1: Render multi-view images of parts and overall shapes, following TRELLIS Step 4.

Step 2: Voxelize parts and overall shapes with dataset_toolkits/voxelize_part.py and dataset_toolkits/voxelize_overall.py.

Step 3: Extract DINO features of parts and overall shapes, following TRELLIS Step 6.

Step 4: Encode SLat of parts and overall shapes, following TRELLIS Step 8.

Step 5: Merge SLat of parts and overall shapes with dataset_toolkits/merge_slat.py.

Step 6: Render image and mask conditions with dataset_toolkits/blender_render_img_mask.py.

Training code

Fill in the values for data_root, train_mesh_list, val_mesh_list, and denoiser in configs/training_part_synthesis.json. The denoiser field takes the path to the diffusion-model checkpoint you wish to finetune, in .pt format (converted with training/utils/transfer_st_pt.py), for example ckpt/slat_flow_img_dit_L_64l8p2_fp16.pt.
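A sketch of the fields to fill in, with illustrative placeholder paths (not actual defaults; the other fields shipped in the config are omitted):

```json
{
  "data_root": "/data/omnipart_processed",
  "train_mesh_list": "/data/omnipart_processed/train_list.txt",
  "val_mesh_list": "/data/omnipart_processed/val_list.txt",
  "denoiser": "ckpt/slat_flow_img_dit_L_64l8p2_fp16.pt"
}
```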

python train.py --config configs/training_part_synthesis.json --output_dir {OUTPUT_PATH} --data_dir {SLat_PATH}

⭐ Acknowledgements

We would like to thank the open-source projects and research works (notably TRELLIS, whose data-processing pipeline we build on) that made OmniPart possible.

We are grateful to the broader research community for their open exploration and contributions to the field of 3D generation.

📚 Citation

@article{yang2025omnipart,
  title={{OmniPart}: Part-Aware 3D Generation with Semantic Decoupling and Structural Cohesion},
  author={Yang, Yunhan and Zhou, Yufan and Guo, Yuan-Chen and Zou, Zi-Xin and Huang, Yukun and Liu, Ying-Tian and Xu, Hao and Liang, Ding and Cao, Yan-Pei and Liu, Xihui},
  journal={arXiv preprint arXiv:2507.06165},
  year={2025}
}