
<p align="center"> <img src="https://github.com/user-attachments/assets/2cc030b4-87e1-40a0-b5bf-1b7d6b62820b" width="300"> </p>

FramePack

Official implementation and desktop software for "Frame Context Packing and Drift Prevention in Next-Frame-Prediction Video Diffusion Models".

Links: Paper, Project Page

FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively.

FramePack compresses input contexts to a constant length so that the generation workload is invariant to video length.
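As a toy illustration of this idea (not the actual model code; the real patchifying kernels are described in the paper), imagine each older frame being compressed by a further factor of two, so the total context is bounded by a geometric series no matter how long the history grows:

```python
def packed_context_tokens(num_history_frames: int, base_tokens: int = 1536) -> int:
    """Toy illustration of frame context packing.

    The most recent frame keeps its full token count; each older frame
    is compressed by a further factor of 2, so the total context stays
    bounded (below 2 * base_tokens) regardless of history length.
    """
    total = 0
    for age in range(num_history_frames):
        total += base_tokens // (2 ** age)  # older frames contribute fewer tokens
    return total

# Context stays bounded as the video grows:
print(packed_context_tokens(4))     # short history
print(packed_context_tokens(1800))  # 1-minute history, still below 2 * 1536
```

This is only a sketch of the bounded-context intuition; the real scheme packs frames with multi-scale patchify kernels rather than a simple halving.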

FramePack can process a very large number of frames with 13B models even on laptop GPUs.

FramePack can be trained with a much larger batch size, similar to the batch size for image diffusion training.

Video diffusion, but feels like image diffusion.

News

2025 July 14: Some pure text2video anti-drifting stress-test results of FramePack-P1 are uploaded here, using common prompts without any reference images.

2025 June 26: Some results of FramePack-P1 are uploaded here. FramePack-P1 will be the next version of FramePack, with two new designs: Planned Anti-Drifting and History Discretization.

2025 May 03: The FramePack-F1 is released. Try it here.

Note that this GitHub repository is the only official FramePack website. We do not have any web services. All other websites are spam and fake, including but not limited to framepack.co, frame_pack.co, framepack.net, frame_pack.net, framepack.ai, frame_pack.ai, framepack.pro, frame_pack.pro, framepack.cc, frame_pack.cc, framepackai.co, frame_pack_ai.co, framepackai.net, frame_pack_ai.net, framepackai.pro, frame_pack_ai.pro, framepackai.cc, frame_pack_ai.cc, and so on. Again, they are all spam and fake. Do not pay money or download files from any of those websites.

Requirements

Note that this repo is functional desktop software with a minimal standalone high-quality sampling system and memory management.

Start with this repo before you try anything else!

Requirements:

  • An NVIDIA RTX 30XX, 40XX, or 50XX series GPU that supports fp16 and bf16 (GTX 10XX/20XX cards are untested).
  • Linux or Windows operating system.
  • At least 6 GB of GPU memory.

To generate a 1-minute video (60 seconds at 30 fps, i.e. 1800 frames) with the 13B model, the minimum required GPU memory is 6 GB. (Yes, 6 GB, not a typo. Laptop GPUs are okay.)

Regarding speed: on an RTX 4090 desktop, generation runs at about 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (with TeaCache). On laptop GPUs such as the 3070 Ti or 3060, it is about 4x to 8x slower. Troubleshoot if your speed is much slower than this.
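As a quick back-of-the-envelope estimate using the speeds quoted above (your hardware will differ):

```python
def eta_minutes(duration_s: float, fps: int, sec_per_frame: float) -> float:
    """Estimated generation wall-clock time in minutes for a clip."""
    return duration_s * fps * sec_per_frame / 60

# A 5-second clip at 30 fps (150 frames) on an RTX 4090:
print(eta_minutes(5, 30, 2.5))  # unoptimized: 6.25 minutes
print(eta_minutes(5, 30, 1.5))  # with TeaCache: 3.75 minutes
```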

In any case, you will see the generated frames directly, since the model performs next-frame(-section) prediction. So you will get lots of visual feedback before the entire video is generated.

Installation

Windows:

>>> Click Here to Download One-Click Package (CUDA 12.6 + Pytorch 2.6) <<<

After downloading, uncompress the package, run update.bat to update, then run run.bat to start.

Note that running update.bat is important; otherwise you may be running a previous version with unfixed bugs.

(screenshot omitted)

Note that the models will be downloaded automatically. You will download more than 30GB from HuggingFace.

Linux:

We recommend having an independent Python 3.10.

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt

To start the GUI, run:

python demo_gradio.py

Note that it supports --share, --port, --server, and so on.

The software supports PyTorch attention, xformers, flash-attn, and sage-attention. By default, it uses PyTorch attention. You can install the other attention kernels if you know how.
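The usual try-import fallback pattern for such backend selection looks roughly like this (a hedged sketch, not the repo's actual selection code):

```python
def pick_attention_backend() -> str:
    """Return the first available attention kernel, falling back to PyTorch.

    Sketch of the common try-import fallback pattern; the repository's
    real selection logic may differ.
    """
    for module, name in [
        ("sageattention", "sage-attention"),
        ("flash_attn", "flash-attn"),
        ("xformers", "xformers"),
    ]:
        try:
            __import__(module)
            return name
        except ImportError:
            pass
    # torch.nn.functional.scaled_dot_product_attention always works
    return "pytorch"

print(pick_attention_backend())
```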

For example, to install sage-attention (linux):

pip install sageattention==1.0.6

However, we highly recommend first trying without sage-attention, since it can influence results, though the influence is minimal.

GUI

(GUI screenshot omitted)

On the left you upload an image and write a prompt.

On the right are the generated videos and latent previews.

Because this is a next-frame-section prediction model, the video grows longer and longer as more sections are generated.

You will see the progress bar for each section and the latent preview for the next section.

Note that the initial progress may be slower than later diffusion steps because the device needs some warm-up.
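The section-by-section behavior described above can be sketched as a toy generator loop (all names here are hypothetical; the actual diffusion sampling is replaced by a placeholder):

```python
def generate_video(num_sections: int, frames_per_section: int = 33):
    """Toy sketch of next-frame-section prediction.

    Each iteration appends one newly sampled section to the video,
    which is why the preview in the GUI keeps growing longer.
    """
    video_frames = []
    for section in range(num_sections):
        # Hypothetical stand-in for the real diffusion sampling call.
        new_frames = [f"frame_{section}_{i}" for i in range(frames_per_section)]
        video_frames.extend(new_frames)
        yield list(video_frames)  # the GUI shows this partial video

partial_lengths = [len(v) for v in generate_video(4)]
print(partial_lengths)  # the preview grows by one section each step
```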

Sanity Check

Before trying your own inputs, we highly recommend going through the sanity check to find out whether anything is wrong with your hardware or software.

Next-frame-section prediction models are very sensitive to subtle differences in noise and hardware. People will usually get slightly different results on different devices, but the results should look similar overall. In some cases, you may get exactly the same results.

Image-to-5-seconds

Download this image:

<img src="https://github.com/user-attachments/assets/f3bc35cf-656a-4c9c-a83a-bbab24858b09" width="150">

Copy this prompt:

The man dances energetically, leaping mid-air with fluid arm swings and quick footwork.

Set like this:

(All default parameters, with TeaCache turned off; screenshot omitted.)

The result will be:

<table> <tr> <td align="center" width="300"> <video src="https://github.com/user-attachments/assets/bc74f039-2b14-4260-a30b-ceacf611a185" controls style="max-width:100%;"> </video> </td> </tr> <tr> <td align="center"> <em>Video may be compressed by GitHub</em> </td> </tr> </table>

Important Note:

Again, this is a next-frame-section prediction model, which means videos are generated frame by frame or section by section.

If you get a much shorter video in the UI, like a video of only 1 second, that is entirely expected. You just need to wait; more sections will be generated to complete the video.

Know the influence of TeaCache and Quantization

Download this image:

<img src="https://github.com/user-attachments/assets/42293e30-bdd4-456d-895c-8fedff71be04" width="150">

Copy this prompt:

The girl dances gracefully, with clear movements, full of charm.

Set like this:

(screenshot omitted)

Turn off TeaCache:

(screenshot omitted)

You will get this:

<table> <tr> <td align="center" width="300"> <video src="https://github.com/user-attachments/assets/04ab527b-6da1-4726-9210-a8853dda5577" controls style="max-width:100%;"> </video> </td> </tr> <tr> <td align="center"> <em>Video may be compressed by GitHub</em> </td> </tr> </table>

Now turn on TeaCache:

(screenshot omitted)

About 30% of users will get this (the other 70% will get other random results depending on their hardware):

<table> <tr> <td align="center" width="300"> <video src="https://github.com/user-attachments/assets/149fb486-9ccc-4a48-b1f0-326253051e9b" controls style="max-width:100%;"> </video> </td> </tr> <tr> <td align="center"> <em>A typical worse result.</em> </td> </tr> </table>

So you can see that TeaCache is not really lossless and can sometimes influence the result a lot.

We recommend using TeaCache to try out ideas, then using the full diffusion process for high-quality final results.

This recommendation also applies to sage-attention, bnb quantization, GGUF, and so on.
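TeaCache-style speedups follow a general caching pattern: reuse an earlier expensive model evaluation when consecutive inputs differ only slightly. The sketch below shows that generic pattern with a toy model, not TeaCache's actual criterion, and illustrates why such caching is lossy:

```python
def cached_denoise(steps, model, threshold=0.05):
    """Skip model calls when the input changes little between steps.

    Generic caching heuristic in the spirit of TeaCache: reuse the last
    output when the relative input change is below `threshold`.  Reused
    outputs are approximations, which is why results can differ from
    running the full diffusion process.
    """
    last_input = None
    last_output = None
    calls = 0
    outputs = []
    for x in steps:
        changed = last_input is None or abs(x - last_input) > threshold * (abs(last_input) + 1e-8)
        if changed:
            last_output = model(x)   # expensive model evaluation
            last_input = x
            calls += 1
        outputs.append(last_output)  # may be a reused (approximate) output
    return outputs, calls

# Slowly varying inputs mean many steps reuse the cache:
outs, calls = cached_denoise([1.0, 1.001, 1.002, 1.5, 1.501], model=lambda x: 2 * x)
print(calls)  # fewer model calls than steps
```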

Image-to-1-minute

<img src="https://github.com/user-attachments/assets/820af6ca-3c2e-4bbc-afe8-9a9be1994ff5" width="150">

The girl dances gracefully, with clear movements, full of charm.

(screenshot omitted)

Set video length to 60 seconds:

(screenshot omitted)

If everything is in order, you will eventually get a result like this.

60s version:

<table> <tr> <td align="center" width="300"> <video src="https://github.com/user-attachments/assets/c3be4bde-2e33-4fd4-b76d-289a036d3a47" controls style="max-width:100%;"> </video> </td> </tr> <tr> <td align="center"> <em>Video may be compressed by GitHub</em> </td> </tr> </table>

6s version:

<table> <tr> <td align="center" width="300"> <video src="https://github.com/user-attachments/assets/37fe2c33-cb03-41e8-acca-920ab3e34861" controls style="max-width:100%;"> </video> </td> </tr> <tr> <td align="center"> <em>Video may be compressed by GitHub</em> </td> </tr> </table>

More Examples

Many more examples are on the Project Page.

Below are some more examples that you may be interested in reproducing.


<img src="https://github.com/user-attachments/assets/99f4d281-28ad-44f5-8700-aa7a4e5638fa" width="150">

The girl dances gracefully, with clear movements, full of charm.

