# MiniSora Community
<!-- PROJECT SHIELDS -->
[![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url] [![Stargazers][stars-shield]][stars-url]

<div align="center">
  <a href="https://trendshift.io/repositories/8252" target="_blank"><img src="https://trendshift.io/api/badge/repositories/8252" alt="mini-sora%2Fminisora | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

<!-- PROJECT LOGO -->
<div align="center">
  <img src="assets/logo.jpg" width="600"/>
</div>

<div align="center">

English | 简体中文

</div>

<p align="center">
  👋 join us on <a href="https://cdn.vansin.top/minisora.jpg" target="_blank">WeChat</a>
</p>

The MiniSora open-source community is a community-driven initiative organized spontaneously by its members. It aims to explore the implementation path and future development direction of Sora.
- Holding regular round-table discussions with the Sora team and the community to explore possibilities.
- Delving into existing technological pathways for video generation.
- Leading the replication of papers and research results related to Sora, such as DiT (MiniSora-DiT).
- Conducting a comprehensive review of Sora-related technologies and their implementations, i.e., "From DDPM to Sora: A Review of Video Generation Models Based on Diffusion Models".
## Hot News
- OpenAI Sora is coming out!
- Movie Gen: A Cast of Media Foundation Models
- Stable Diffusion 3: MM-DiT: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
- MiniSora-DiT: Reproducing the DiT Paper with XTuner
- Introduction of MiniSora and Latest Progress in Replicating Sora

## Reproduction Group of MiniSora Community

### Sora Reproduction Goals of MiniSora
- GPU-Friendly: Ideally, it should have low requirements for GPU memory and GPU count, e.g., being trainable and inferable with compute such as 8 A100 80G cards, 8 A6000 48G cards, or an RTX 4090 24G.
- Training-Efficiency: It should achieve good results without requiring extensive training time.
- Inference-Efficiency: Generated videos do not need to be long or high-resolution; 3-10 seconds in length at 480p resolution is acceptable.
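As a rough illustration of the GPU-friendly goal, the back-of-envelope sketch below estimates per-GPU training memory for weights, gradients, and Adam optimizer states under a common mixed-precision recipe. The 700M parameter count is a hypothetical DiT-XL-scale example, not a MiniSora specification, and activation memory is deliberately ignored:

```python
def training_memory_gb(n_params: float) -> float:
    """Rough memory for weights + grads + Adam states.

    Mixed-precision rule of thumb: fp16 weights (2 bytes) +
    fp16 grads (2 bytes) + fp32 master weights, momentum, and
    variance (4 + 4 + 4 bytes) = ~16 bytes per parameter.
    Activations are ignored, so real usage is noticeably higher.
    """
    return n_params * 16 / 1024**3

# A hypothetical 700M-parameter, DiT-XL-sized model:
print(f"{training_memory_gb(700e6):.1f} GB")  # ~10.4 GB before activations
```

Under this estimate such a model fits comfortably on a single 24G card for the states alone, which is why activation-memory techniques (section 17.2.1 below) dominate the remaining budget.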
### MiniSora-DiT: Reproducing the DiT Paper with XTuner

https://github.com/mini-sora/minisora-DiT

#### Requirements
We are recruiting MiniSora Community contributors to reproduce DiT using XTuner.
We hope community members have the following backgrounds:

- Familiarity with the OpenMMLab MMEngine mechanism.
- Familiarity with DiT.
#### Background

- The author of DiT is the same as the author of Sora.
- XTuner has the core technology to efficiently train sequences of length 1000K.
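For contributors new to DiT, the numpy sketch below illustrates the adaLN-Zero modulation at the heart of each DiT block: a conditioning vector (timestep/class embedding) regresses per-channel shift, scale, and gate values that modulate the normalized tokens. This is an illustrative simplification, not the minisora-DiT code, and it omits the attention/MLP sublayer that sits between modulation and gating in the real block:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """LayerNorm over the channel dim, without a learned affine (as in DiT)."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln_zero_modulate(x, cond, W, b):
    """adaLN-Zero: regress shift/scale/gate from the conditioning vector.

    x:    (tokens, dim) patch tokens
    cond: (dim,) timestep/class embedding
    W, b: linear layer mapping cond -> 3*dim; zero-initialized in the
          paper so each block starts as the identity.
    (Simplified: the attention/MLP sublayer between modulation and
    gating is omitted here.)
    """
    shift, scale, gate = np.split(cond @ W + b, 3)
    h = layer_norm(x) * (1 + scale) + shift  # modulate normalized tokens
    return x + gate * h                      # gated residual connection

rng = np.random.default_rng(0)
dim, tokens = 8, 4
x = rng.normal(size=(tokens, dim))
cond = rng.normal(size=dim)

# Zero-initialized regression: the block reduces to the identity.
W, b = np.zeros((dim, 3 * dim)), np.zeros(3 * dim)
out = adaln_zero_modulate(x, cond, W, b)
print(np.allclose(out, x))  # True: gate is 0 at init
```

The zero-init gate is what makes deep DiT stacks trainable from scratch: every residual branch contributes nothing at step 0 and is learned gradually.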
#### Support

## Recent round-table Discussions
### Paper Interpretation of Stable Diffusion 3 paper: MM-DiT
Speaker: MMagic Core Contributors
Live Streaming Time: 03/12 20:00
Highlights: MMagic core contributors will lead us in interpreting the Stable Diffusion 3 paper, discussing the architecture details and design principles of Stable Diffusion 3.
PPT: FeiShu Link
<!-- Please scan the QR code with WeChat to book a live video session. <div align="center"> <img src="assets/SD3论文领读.png" width="100"/> </div> -->

### Highlights from Previous Discussions
#### Night Talk with Sora: Video Diffusion Overview
ZhiHu Notes: A Survey on Generative Diffusion Model: An Overview of Generative Diffusion Models
## Paper Reading Program
- Technical Report: Video generation models as world simulators
- Latte: Latent Diffusion Transformer for Video Generation
- Stable Cascade (ICLR 24 Paper): Würstchen: An efficient architecture for large-scale text-to-image diffusion models
- Stable Diffusion 3: MM-DiT: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
- Updating...
### Recruitment of Presenters

## Related Work
- 01 Diffusion Model
- 02 Diffusion Transformer
- 03 Baseline Video Generation Models
- 04 Diffusion UNet
- 05 Video Generation
- 06 Dataset
- 6.1 Public Datasets
- 6.2 Video Augmentation Methods
- 6.2.1 Basic Transformations
- 6.2.2 Feature Space
- 6.2.3 GAN-based Augmentation
- 6.2.4 Encoder/Decoder Based
- 6.2.5 Simulation
- 07 Patchifying Methods
- 08 Long-context
- 09 Audio Related Resource
- 10 Consistency
- 11 Prompt Engineering
- 12 Security
- 13 World Model
- 14 Video Compression
- 15 Mamba
- 16 Existing high-quality resources
- 17 Efficient Training
- 17.1 Parallelism based Approach
- 17.1.1 Data Parallelism (DP)
- 17.1.2 Model Parallelism (MP)
- 17.1.3 Pipeline Parallelism (PP)
- 17.1.4 Generalized Parallelism (GP)
- 17.1.5 ZeRO Parallelism (ZP)
- 17.2 Non-parallelism based Approach
- 17.2.1 Reducing Activation Memory
- 17.2.2 CPU-Offloading
- 17.2.3 Memory Efficient Optimizer
- 17.3 Novel Structure
- 18 Efficient Inference
- 18.1 Reduce Sampling Steps
- 18.1.1 Continuous Steps
- 18.1.2 Fast Sampling
- 18.1.3 Step distillation
- 18.2 Optimizing Inference
- 18.2.1 Low-bit Quantization
- 18.2.2 Parallel/Sparse inference
| <h3 id="diffusion-models">01 Diffusion Models</h3> | |
| :------------- | :------------- |
| Paper | Link |
| 1) Guided-Diffusion: Diffusion Models Beat GANs on Image Synthesis | NeurIPS 21 Paper, GitHub |
| 2) Latent Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models | CVPR 22 Paper, GitHub |
| 3) EDM: Elucidating the Design Space of Diffusion-Based Generative Models | NeurIPS 22 Paper, GitHub |
| 4) DDPM: Denoising Diffusion Probabilistic Models | NeurIPS 20 Paper, GitHub |
| 5) DDIM: Denoising Diffusion Implicit Models | ICLR 21 Paper, GitHub |
| 6) Score-Based Diffusion: Score-Based Generative Modeling through Stochastic Differential Equations | ICLR 21 Paper, GitHub, Blog |
| 7) Stable Cascade: Würstchen: An efficient architecture for large-scale text-to-image diffusion models | ICLR 24 Paper, GitHub, Blog |
| 8) Diffusion Models in Vision: A Survey | [TPAMI 23 Paper](ht
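Several of the entries above build on the DDPM forward process, which noises data in closed form as x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε. The numpy sketch below is a toy illustration of that formula with the linear beta schedule from the DDPM paper, not code from any listed repository:

```python
import numpy as np

def ddpm_forward(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    """
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

# Linear beta schedule over T=1000 steps, as in the DDPM paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1 - betas)  # cumulative signal-retention factor

rng = np.random.default_rng(0)
x0 = rng.normal(size=(16, 16))                    # stand-in "image"
x_early = ddpm_forward(x0, 10, alpha_bar, rng)    # barely noised
x_late = ddpm_forward(x0, T - 1, alpha_bar, rng)  # nearly pure noise

print(alpha_bar[-1] < 1e-4)  # True: signal is almost fully destroyed at t=T
```

Because q(x_t | x_0) is available in closed form, training can sample any timestep directly instead of simulating the chain step by step; DDIM later exploits the same marginals for fast deterministic sampling.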