CoMA: Compositional Human Motion Generation with Multi-modal Agents

Project Page | arXiv

Shanlin Sun*, Gabriel De Araujo*, Jiaqi Xu*, Shenghan Zhou*, Hanwen Zhang, Ziheng Huang, Chenyu You and Xiaohui Xie

(* Equal Contribution)

  • Presented by University of California, Irvine; Southeast University; Chongqing University; Huazhong University of Science and Technology; Northeastern University; Stony Brook University
  • :mailbox_with_mail: Primary contact: Shanlin Sun (shanlins@uci.edu)

Highlights

:star2: CoMA is a compositional human motion generation framework built on multi-modal agents.

:star2: CoMA generates high-quality motion sequences from long, complex, and context-rich text prompts.

📰 News

📝 TODO List

  • [ ] Release CoMA full implementation.
  • [ ] Release MVC training code.
  • [ ] Release SPAM training code.
  • [ ] Release MVC inference code and checkpoints.
  • [ ] Release SPAM inference code and checkpoints.