pi-Flow: Policy-Based Flow Models

Official PyTorch implementation of the ICLR 2026 paper:

pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation <br> Hansheng Chen<sup>1</sup>, Kai Zhang<sup>2</sup>, Hao Tan<sup>2</sup>, Leonidas Guibas<sup>1</sup>, Gordon Wetzstein<sup>1</sup>, Sai Bi<sup>2</sup><br> <sup>1</sup>Stanford University, <sup>2</sup>Adobe Research <br> arXiv | ComfyUI | pi-Qwen Demo🤗 | pi-FLUX Demo🤗 | pi-FLUX.2 Demo🤗
<img src="assets/stanford_adobe_logos.png" width="400" alt=""/> <img src="assets/teaser.jpg" alt=""/>

🔥News

- [Dec 12, 2025] pi-FLUX.2 is now available for 4-step image generation and editing! Check out the pi-FLUX.2 Demo🤗. Please re-install the latest version of LakonLab (this repository) to use pi-FLUX.2.
- [Nov 7, 2025] ComfyUI-piFlow is now available! It supports 4-step sampling of Qwen-Image and FLUX.1 dev using 8-bit models on a single consumer-grade GPU, powered by ComfyUI.
Highlights
- Novel Framework: pi-Flow stands for policy-based flow models. The network does not output a denoised state; instead, it outputs a fast policy that rolls out multiple ODE substeps to reach the denoised state.
  <img src="assets/piflow_framework_comparison.png" width="1000" alt=""/>
- Simple Distillation: pi-Flow adopts policy-based imitation distillation (pi-ID). No JVPs, no auxiliary networks, no GANs—just a single L2 loss between the policy and the teacher.
  <img src="assets/piid.png" width="1000" alt=""/>
- Diversity and Teacher Alignment: pi-Flow mitigates the quality–diversity trade-off, generating highly diverse samples while maintaining high quality. It also remains highly faithful to the teacher’s style. The example below shows that pi-Flow samples generally align with the teacher’s outputs and exhibit significantly higher diversity than those from DMD students (e.g., SenseFlow, Qwen-Image Lightning).
  <img src="assets/diversity_comparison.jpg" width="1000" alt=""/>
- Texture Details: pi-Flow excels in generating fine-grained texture details. When using additional photorealistic style LoRAs, this advantage becomes very prominent, as shown in the comparison below (zoom in for best view).
  <img src="assets/piflow_dmd_texture_comparison.jpg" width="1000" alt=""/>
- Scalability: pi-Flow scales from ImageNet DiT to 20-billion-parameter text-to-image models (Qwen-Image). This codebase is highly optimized for large-scale experiments. See the Codebase section for details.
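The policy rollout and the pi-ID objective described above can be sketched in a few lines of Python. This is a minimal illustration only: the names `rollout`, `pi_id_loss`, `policy`, and `teacher` are hypothetical and do not correspond to this repository's actual API.

```python
import torch

def rollout(policy, x_t, t, num_substeps=8):
    """Roll out a fast policy over Euler ODE substeps from time t toward 0.

    `policy(x, s)` stands in for the cheap closed-form velocity predictor
    produced by a single student-network call (illustrative names only).
    """
    dt = t / num_substeps
    s = t
    for _ in range(num_substeps):
        x_t = x_t - dt * policy(x_t, s)  # one cheap Euler substep, no extra network calls
        s = s - dt
    return x_t

def pi_id_loss(policy, teacher, x_s, s):
    """pi-ID in spirit: a single L2 loss matching the policy's velocity to
    the teacher's velocity at an intermediate state. No JVPs, no GANs."""
    with torch.no_grad():
        v_teacher = teacher(x_s, s)  # teacher is frozen during distillation
    return ((policy(x_s, s) - v_teacher) ** 2).mean()
```

The key point of the framework is visible even in this toy: the student is queried once, while the resulting policy is evaluated many times along the substeps.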
Installation
The code has been tested in the following environment:
- Linux (tested on Ubuntu 20 and above)
- PyTorch 2.6
With the above prerequisites, run pip install -e . --no-build-isolation from the repository root to install the LakonLab codebase and its dependencies.
An example of installation commands is shown below:
```bash
# Create conda environment
conda create -y -n piflow python=3.10 ninja
conda activate piflow

# Install PyTorch. Go to https://pytorch.org/get-started/previous-versions/ to select the appropriate version
pip install torch==2.6.0 torchvision==0.21.0

# Move to this repository (the folder with setup.py) after cloning
cd <PATH_TO_YOUR_LOCAL_REPO>

# Install LakonLab in editable mode
pip install -e . --no-build-isolation
```
Additional notes:
- To access FLUX models, please accept the conditions here, and then run `huggingface-cli login` to log in with your HuggingFace account.
- Optionally, if you would like to use AWS S3 for dataset and checkpoint storage, please also install the AWS CLI.
- This codebase may work on Windows systems, but it has not been tested extensively.
Inference: Diffusers Pipelines
We provide diffusers pipelines for easy inference. The following code demonstrates how to sample images from the distilled Qwen-Image and FLUX models.
4-NFE GM-Qwen (GMFlow Policy)
Note: GM-Qwen supports elastic inference. Feel free to set num_inference_steps to any value above 4.
```python
import torch
from lakonlab.models.diffusions.schedulers import FlowMapSDEScheduler
from lakonlab.pipelines.pipeline_piqwen import PiQwenImagePipeline

pipe = PiQwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-Qwen-Image',
    subfolder='gmqwen_k8_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMapSDEScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config, shift=3.2, use_dynamic_shifting=False, final_step_size_scale=0.5)
pipe = pipe.to('cuda')

out = pipe(
    prompt='Photo of a coffee shop entrance featuring a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup," with a neon '
           'light beside it displaying "π-通义千问". Next to it hangs a poster showing a beautiful Chinese woman, '
           'and beneath the poster is written "e≈2.71828-18284-59045-23536-02874-71352".',
    width=1920,
    height=1080,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('gmqwen_4nfe.png')
```
<img src="assets/gmqwen_4nfe.png" width="600" alt=""/>
4-NFE GM-FLUX (GMFlow Policy)
Note: For the 8-NFE version, replace gmflux_k8_piid_4step with gmflux_k8_piid_8step and set num_inference_steps=8.
```python
import torch
from lakonlab.models.diffusions.schedulers import FlowMapSDEScheduler
from lakonlab.pipelines.pipeline_piflux import PiFluxPipeline

pipe = PiFluxPipeline.from_pretrained(
    'black-forest-labs/FLUX.1-dev',
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-FLUX.1',
    subfolder='gmflux_k8_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMapSDEScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config, shift=3.2, use_dynamic_shifting=False, final_step_size_scale=0.5)
pipe = pipe.to('cuda')

out = pipe(
    prompt='A portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the Sydney Opera House holding a sign on the chest that says "Welcome Friends"',
    width=1360,
    height=768,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('gmflux_4nfe.png')
```
<img src="assets/gmflux_4nfe.png" width="600" alt=""/>
4-NFE DX-Qwen and DX-FLUX (DX Policy)
See example_dxqwen_pipeline.py and example_dxflux_pipeline.py for examples of using the DX policy.
4-NFE GM-FLUX.2 (GMFlow Policy)
See example_gmflux2_pipeline.py for an example of pi-FLUX.2 inference.
Note: GM-FLUX.2 also supports elastic inference. Feel free to set num_inference_steps to any value above 4.
Inference: Gradio Apps
We provide Gradio apps for interactive inference with the distilled GM-Qwen and GM-FLUX models. Official apps are available on HuggingFace Spaces: pi-Qwen Demo🤗 | pi-FLUX Demo🤗 | pi-FLUX.2 Demo🤗.
Run the following commands to launch the apps locally:
```bash
python demo/gradio_gmqwen.py --share    # GM-Qwen elastic inference
python demo/gradio_gmflux.py --share    # GM-FLUX 4-NFE and 8-NFE inference
python demo/gradio_gmflux2.py --share   # GM-FLUX.2 elastic inference for image generation and editing
```
<img src="assets/gradio_apps.jpg" width="600" alt=""/>
Toy Models
To aid understanding, we provide minimal toy model training scripts that overfit the teacher's behavior on a fixed initial noise using a static GMFlow policy (without a student network).
Run the following command to distill a toy model from an ImageNet DiT (REPA):

```bash
python demo/train_piflow_dit_imagenet_toymodel.py
```
<img src="assets/piflow_dit_imagenet_toymodel.png" width="200" alt=""/>
Run the following command to distill a toy model from Qwen-Image (requires 40GB VRAM):

```bash
python demo/train_piflow_qwen_toymodel.py
```
<img src="assets/piflow_qwen_toymodel.png" width="600" alt=""/>
The results of these toy models demonstrate the expressiveness of the GMFlow policy—a GMFlow policy with 32 components can fit the entire ODE trajectory from $t=1$ to $t=0$, making 1-NFE generation theoretically possible. In practice, the bottleneck is often the student network rather than the policy itself, so more NFEs are still needed.
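The intuition behind this expressiveness can be illustrated with a 2-D toy in which each mixture component "owns" a time band, so a single set of static parameters encodes a curved trajectory over the whole time range. This is a loose sketch under our own assumptions, not the actual GMFlow parameterization used in the repository; all names here are hypothetical.

```python
import torch

torch.manual_seed(0)

# Static mixture parameters, fitted once per trajectory in the toy setting.
K = 4
means = torch.randn(K, 2)                 # per-component target points
centers = torch.linspace(0.0, 1.0, K)     # time band owned by each component

def mixture_velocity(x, t, temp=0.05):
    # Time-weighted responsibilities pick which component is active, so the
    # induced velocity field changes along the trajectory even though the
    # mixture parameters themselves are static.
    w = torch.softmax(-(t - centers) ** 2 / temp, dim=0)
    mu = (w[:, None] * means).sum(dim=0)
    # Flow-matching-style velocity pointing from x toward the current target.
    return (x - mu) / t

def integrate(x1, steps=32):
    # Euler integration from t=1 down to t=1/steps (never evaluates at t=0).
    dt = 1.0 / steps
    x = x1
    for i in range(steps):
        t = 1.0 - i * dt
        x = x - dt * mixture_velocity(x, t)
    return x
```

With 32 Euler substeps, the integrator lands on the mixture's time-weighted target regardless of where the noise started, mirroring how the static policy in the toy scripts can reproduce a full teacher trajectory without further network calls.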
Training and Evaluation
Follow the instructions in the following links to reproduce the main results in the paper:
By default, checkpoints will be saved into `checkpoints/`.