From One to More: Contextual Part Latents for 3D Generation (ICCV 2025)
arXiv Preprint | Project | Dataset
We present CoPart, a new part-based 3D generation framework that represents a 3D object with multiple contextual part latents and simultaneously generates coherent 3D parts. We are also pleased to release PartVerse, the first large-scale, manually annotated 3D object part dataset.

We follow the pipeline of raw data → mesh segmentation → human post-correction → text caption generation to produce part-level data.
Download & Usage
You can download the PartVerse dataset from Google Drive or Hugging Face. After decompressing the dataset, the data directory should look as follows:
dataset/
├── textured_part_glbs/
├── normalized_glbs/
├── anno_infos/
└── text_captions.json
- textured_part_glbs/ contains a textured 3D mesh for each decomposed part of an object, stored in GLB format.
- normalized_glbs/ provides the complete, normalized 3D mesh of each object in GLB format. These meshes are aligned with the part-level meshes and can be used for holistic shape analysis or comparison.
- anno_infos/ provides files used to generate auxiliary information about the parts.
- text_captions.json stores a descriptive text caption for each part, automatically generated with a Vision-Language Model (VLM).
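As an illustration of consuming the captions file, the sketch below writes a stand-in text_captions.json and reads it back with the standard library. The key and field names here are hypothetical; check the real file for its actual schema.

```python
import json
import os
import tempfile

# Hypothetical schema: object IDs mapping to per-part captions.
# The real text_captions.json may be organized differently.
sample = {
    "object_0001": {
        "part_00": "a wooden chair leg",
        "part_01": "a woven seat cushion",
    }
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "text_captions.json")
    # Write the stand-in file, then load it the way you would load
    # the real text_captions.json after decompressing the dataset.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(sample, f)

    with open(path, encoding="utf-8") as f:
        captions = json.load(f)

    for obj_id, parts in captions.items():
        for part_id, caption in parts.items():
            print(f"{obj_id}/{part_id}: {caption}")
```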
Because some objects contain a large number of parts, you may want to discard unimportant ones (such as screws). We provide partverse/get_infos.py to process the data. Running it gives you (1) statistical information about the parts, (2) a priority order for discarding them, and (3) the view with maximum overlap between renders of the full object and its parts. Please install nvdiffrast and kaolin before use.
python partverse/get_infos.py --data_root ${DATA_PATH} --global_info_save_path ${SAVE_PATH} --max_visible_info_save_path ${SAVE_PATH}
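The actual discard-priority criterion is defined in partverse/get_infos.py; as a simple illustrative stand-in, the sketch below ranks parts by axis-aligned bounding-box volume, so tiny parts such as screws come first in the discard order.

```python
# Illustrative stand-in for a discard-priority heuristic; the actual
# criterion used by partverse/get_infos.py may differ.

def bbox_volume(bounds):
    """bounds = ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def discard_priority(parts):
    """parts: dict of part name -> bounds; returns names, most discardable first."""
    return sorted(parts, key=lambda name: bbox_volume(parts[name]))

# Hypothetical parts of a chair: a screw is tiny, the seat is large.
parts = {
    "screw":    ((0.0, 0.0, 0.0), (0.01, 0.01, 0.02)),
    "seat":     ((-0.5, -0.5, 0.4), (0.5, 0.5, 0.5)),
    "backrest": ((-0.5, 0.4, 0.5), (0.5, 0.5, 1.2)),
}

print(discard_priority(parts))  # smallest (most discardable) part first
```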
We provide a rendering script following TRELLIS. Use partverse/render_parts.py to render textured_part_glbs (part objects) and partverse/render_dir.py to render normalized_glbs (whole objects), e.g.,
python partverse/render_parts.py --textured_part_glbs_root ${PART_GLB_PATH} --out_dir ${OUT_PATH} --num_views 8 --elevation 30
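The --num_views and --elevation flags correspond to cameras spaced evenly in azimuth at a fixed elevation, all looking at the object center. The sketch below shows this camera placement under that assumption; the exact camera convention of the render scripts may differ.

```python
import math

def camera_positions(num_views=8, elevation_deg=30.0, radius=2.0):
    """Place num_views cameras evenly around the up (z) axis at a fixed
    elevation on a sphere of the given radius, looking at the origin."""
    elev = math.radians(elevation_deg)
    positions = []
    for i in range(num_views):
        azim = 2.0 * math.pi * i / num_views  # evenly spaced azimuth angles
        x = radius * math.cos(elev) * math.cos(azim)
        y = radius * math.cos(elev) * math.sin(azim)
        z = radius * math.sin(elev)
        positions.append((x, y, z))
    return positions

cams = camera_positions()
print(len(cams))  # 8 viewpoints on a circle of constant elevation
```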
In addition, we provide text captioning code so users can customize text prompts for their own models. We currently use Qwen2.5-VL-32B as the VLM; you can replace it with any other VLM.
python partverse/get_text_caption.py --raw_img_root ${FULL_OBJECT_IMG_PATH} --part_img_root ${PART_IMG_PATH} --info_file ${MAX_VIS_INFO_PATH} --output_file ${OUT_PATH} --vlm_ckpt_dir ${VLM_HF_DOWN_PATH}
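The VLM call itself depends on your checkpoint; what you control when customizing is the prompt sent alongside the full-object render and the part render. The prompt builder below is purely hypothetical and is not the prompt used by partverse/get_text_caption.py; it only illustrates the kind of per-part instruction you might assemble.

```python
# Hypothetical prompt builder for part captioning; the actual prompt used by
# partverse/get_text_caption.py may differ.

def build_part_prompt(object_name, part_index, num_parts):
    """Assemble a captioning instruction for one part of an object."""
    return (
        f"You are shown two renders: the full object '{object_name}' and "
        f"part {part_index + 1} of {num_parts}, shown from its most "
        "visible viewpoint. Describe the highlighted part in one short "
        "sentence, mentioning its shape, material, and function."
    )

prompt = build_part_prompt("office chair", part_index=0, num_parts=5)
print(prompt)
```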
🚩 News
- [2025/07/05] Our paper has been accepted to ICCV 2025. Code is coming soon, stay tuned! 🔥
📖 Citation
@inproceedings{dong2025copart,
  title={From One to More: Contextual Part Latents for 3D Generation},
  author={Dong, Shaocong and Ding, Lihe and Chen, Xiao and Li, Yaokun and Wang, Yuxin and Wang, Yucheng and Wang, Qi and Kim, Jaehyeok and Gao, Chenjian and Huang, Zhanpeng and Wang, Zibin and Xue, Tianfan and Xu, Dan},
  booktitle={ICCV},
  year={2025}
}