ConsisID
[CVPR 2025 Highlight🔥] Identity-Preserving Text-to-Video Generation by Frequency Decomposition
- Open-Sora Plan: Open-Source Large Video Generation Model <br> Bin Lin, Yunyang Ge, Xinhua Cheng, et al.
- Helios: Real Real-Time Long Video Generation Model <br> Shenghai Yuan, Jinfa Huang, Xianyi He, et al.
- OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation <br> Shenghai Yuan, Xianyi He, Yufan Deng, et al.
- MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators <br> Shenghai Yuan, Jinfa Huang, Yujun Shi, et al.
- ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation <br> Shenghai Yuan, Jinfa Huang, Yongqi Xu, et al.
📣 News
- ⏳⏳⏳ Release the full code & datasets & weights.
- [2026.03.08] 👋 We introduce Helios, a breakthrough video generation model that achieves minute-scale, high-quality video synthesis at 19.5 FPS on a single H100 GPU, without relying on conventional long-video anti-drifting strategies or standard video acceleration techniques. Welcome to check the Technical Report!
- [2025.08.30] 🚀 Thanks to the excellent work DanceGRPO for transferring ConsisID data to I2V RL training; please refer to here for more details. Similarly, you can also try using OpenS2V-5M for RL training.
- [2025.05.27] 🔥 Introducing OpenS2V-Nexus, which consists of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. Welcome to try it!
- [2025.04.04] 🔥 Breaking news! ConsisID has been recommended as a CVPR Highlight.
- [2025.03.27] 🔥 We have updated our technical report. Please click here to view it.
- [2025.02.27] 🔥 ConsisID has been accepted by CVPR 2025, and we will update arXiv with more details soon. Stay tuned!
- [2025.02.16] 🔥 We have adapted the code for CogVideoX1.5, so you can use our code not only to train ConsisID but also the CogVideoX series.
- [2025.01.19] 🤗 Thanks to @arrow, @yiyixuxu, @hlky and @stevhliu, ConsisID will be merged into diffusers in 0.33.0. Until then, please use `pip install git+https://github.com/huggingface/diffusers.git` to install the diffusers dev version. We have also reorganized the code and weight configs, so please update your local files if you cloned them previously.
- [2024.12.26] 🚀 We release the cache inference code for ConsisID, powered by TeaCache. Thanks to @LiewFeng for his help.
- [2024.12.24] 🚀 We release the parallel inference code for ConsisID, powered by xDiT. Thanks to @feifeibear for his help.
- [2024.12.09] 🔥 We release the test set and metric calculation code used in the paper, so now you can measure the metrics on your own machine. Please refer to this guide for more details.
- [2024.12.08] 🔥 The code for <u>data preprocessing</u> is out, which is used to obtain the training data required by ConsisID and supports multi-ID annotation. Please refer to this guide for more details.
- [2024.12.04] Thanks to @shizi for providing 🤗 Windows-ConsisID and 🟣 Windows-ConsisID, which make it easy to run ConsisID on Windows.
- [2024.12.01] 🔥 We provide the full text prompts corresponding to all the videos on the project page. Click here to get them and try the demo.
- [2024.11.30] 🤗 We have fixed the Hugging Face demo; welcome to try it.
- [2024.11.29] 🏃‍♂️ The current code and weights are our early versions; the differences from the latest arXiv version can be viewed here. We will release the full code in the next few days.
- [2024.11.28] Thanks to @camenduru for providing a [Jupyter Notebook](https://colab.research.google.com/github/camenduru/ConsisID-jupyter/blob/main/ConsisID_jupyter.ipynb).
