CoMA: Compositional Human Motion Generation with Multi-modal Agents
Shanlin Sun*, Gabriel De Araujo*, Jiaqi Xu*, Shenghan Zhou*, Hanwen Zhang, Ziheng Huang, Chenyu You and Xiaohui Xie
(* Equal Contribution)
- Presented by University of California, Irvine; Southeast University; Chongqing University; Huazhong University of Science and Technology; Northeastern University; Stony Brook University
- :mailbox_with_mail: Primary contact: Shanlin Sun ( shanlins@uci.edu )
Highlights <a name="highlights"></a>
:star2: We introduce CoMA, a compositional human motion generation framework built on multi-modal agents.
:star2: CoMA generates high-quality motion sequences from long, complex, and context-rich text prompts.

📰 News
📝 TODO List
- [ ] Release the full CoMA implementation.
- [ ] Release MVC training code.
- [ ] Release SPAM training code.
- [ ] Release MVC inference code and checkpoints.
- [ ] Release SPAM inference code and checkpoints.
