
MiniSora Community

<!-- PROJECT SHIELDS -->

[![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url] [![Stargazers][stars-shield]][stars-url] <br />

<div align="center"> <a href="https://trendshift.io/repositories/8252" target="_blank"><img src="https://trendshift.io/api/badge/repositories/8252" alt="mini-sora%2Fminisora | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> </div> <!-- PROJECT LOGO --> <div align="center"> <img src="assets/logo.jpg" width="600"/> <div>&nbsp;</div> <div align="center"> </div> </div> <div align="center">

English | 简体中文

</div> <p align="center"> 👋 join us on <a href="https://cdn.vansin.top/minisora.jpg" target="_blank">WeChat</a> </p>

The MiniSora open-source community is a community-driven initiative organized spontaneously by its members. It aims to explore the implementation path and future development direction of Sora.

  • Holding regular round-table discussions with the Sora team and the community to explore possibilities.
  • Delving into existing technological pathways for video generation.
  • Leading the replication of papers and research results related to Sora, such as DiT (MiniSora-DiT).
  • Conducting a comprehensive review of Sora-related technologies and their implementations: "From DDPM to Sora: A Review of Video Generation Models Based on Diffusion Models".
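As a refresher on the DDPM formulation that the review above starts from, here is a minimal sketch of the forward (noising) process, assuming the standard linear beta schedule from the DDPM paper; it is illustrative and not taken from any MiniSora code:

```python
import numpy as np

# Linear beta schedule from the DDPM paper (Ho et al., 2020):
# 1000 steps, beta rising from 1e-4 to 0.02.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product, \bar{alpha}_t

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return xt, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))          # stand-in for a clean image
xt, eps = q_sample(x0, t=T - 1, rng=rng)  # nearly pure noise at the final step
```

A model trained under this objective learns to predict `eps` from `xt` and `t`; sampling then runs the chain in reverse.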


Reproduction Group of MiniSora Community

Sora Reproduction Goals of MiniSora

  1. GPU-Friendly: Ideally, it should have low requirements for GPU memory and GPU count, e.g., trainable and inferable with 8 A100 80G cards, 8 A6000 48G cards, or an RTX 4090 24G.
  2. Training-Efficiency: It should achieve good results without requiring extensive training time.
  3. Inference-Efficiency: Generated videos need not be long or high-resolution; 3-10 seconds in length at 480p resolution is acceptable.
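To sanity-check the first goal, a back-of-the-envelope estimate of training-time optimizer state, assuming a hypothetical DiT-XL-scale model (~675M parameters) and a standard fp16 mixed-precision Adam setup; the figures are illustrative assumptions, not measurements of any MiniSora code:

```python
# Rough per-parameter training memory under fp16 mixed precision with Adam:
# fp16 weights (2 B) + fp16 grads (2 B) + fp32 master weights (4 B)
# + fp32 Adam moments m and v (8 B) = 16 bytes/param, activations excluded.
params = 675_000_000             # assumed DiT-XL-scale parameter count
bytes_per_param = 2 + 2 + 4 + 8
train_gib = params * bytes_per_param / 2**30
print(f"optimizer-state footprint: {train_gib:.1f} GiB")
```

Roughly 10 GiB of state fits within a single RTX 4090's 24G before activations, which is why activation checkpointing and batch size dominate the remaining budget.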

MiniSora-DiT: Reproducing the DiT Paper with XTuner

https://github.com/mini-sora/minisora-DiT

Requirements

We are recruiting MiniSora Community contributors to reproduce DiT using XTuner.

We hope contributors have the following background:

  1. Familiarity with the OpenMMLab MMEngine mechanism.
  2. Familiarity with DiT.

Background

  1. DiT and Sora share an author.
  2. XTuner has the core technology to efficiently train on sequences up to 1000K tokens long.

Support

  1. Computational resources: 2×A100.
  2. Strong support from XTuner core developer @pppppM.

Recent Round-Table Discussions

Interpretation of the Stable Diffusion 3 Paper: MM-DiT

Speaker: MMagic Core Contributors

Live Streaming Time: 03/12 20:00

Highlights: MMagic core contributors will lead an interpretation of the Stable Diffusion 3 paper, discussing its architectural details and design principles.
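The core idea of MM-DiT discussed in that session can be sketched briefly: text and image tokens keep separate projection weights but attend jointly over the concatenated sequence. The dimensions, single head, and plain-softmax form below are illustrative assumptions, not Stable Diffusion 3's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
txt = rng.standard_normal((4, d))   # 4 text tokens
img = rng.standard_normal((9, d))   # 9 image-patch tokens

def proj(x, seed):
    """A stand-in linear projection; each modality gets its own weights."""
    W = np.random.default_rng(seed).standard_normal((d, d)) / np.sqrt(d)
    return x @ W

# Modality-specific Q/K/V projections, concatenated into one sequence.
q = np.concatenate([proj(txt, 1), proj(img, 4)])
k = np.concatenate([proj(txt, 2), proj(img, 5)])
v = np.concatenate([proj(txt, 3), proj(img, 6)])

# Joint softmax attention: every token attends across both modalities.
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ v  # shape (13, d)
```

After the joint attention, MM-DiT routes the text and image halves of `out` back through separate per-modality MLPs, which is what distinguishes it from a single shared-weight transformer block.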

PPT: FeiShu Link

<!-- Please scan the QR code with WeChat to book a live video session. <div align="center"> <img src="assets/SD3论文领读.png" width="100"/> <div>&nbsp;</div> <div align="center"> </div> </div> -->

Highlights from Previous Discussions

Night Talk with Sora: Video Diffusion Overview

ZhiHu Notes: A Survey on Generative Diffusion Model: An Overview of Generative Diffusion Models

Paper Reading Program

Recruitment of Presenters

Related Work

| <h3 id="diffusion-models">01 Diffusion Models</h3> | |
| :------------- | :------------- |
| Paper | Link |
| 1) Guided-Diffusion: Diffusion Models Beat GANs on Image Synthesis | NeurIPS 21 Paper, GitHub |
| 2) Latent Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models | CVPR 22 Paper, GitHub |
| 3) EDM: Elucidating the Design Space of Diffusion-Based Generative Models | NeurIPS 22 Paper, GitHub |
| 4) DDPM: Denoising Diffusion Probabilistic Models | NeurIPS 20 Paper, GitHub |
| 5) DDIM: Denoising Diffusion Implicit Models | ICLR 21 Paper, GitHub |
| 6) Score-Based Diffusion: Score-Based Generative Modeling through Stochastic Differential Equations | ICLR 21 Paper, GitHub, Blog |
| 7) Stable Cascade: Würstchen: An efficient architecture for large-scale text-to-image diffusion models | ICLR 24 Paper, GitHub, Blog |
| 8) Diffusion Models in Vision: A Survey | [TPAMI 23 Paper](ht
