FlowBack
[AAAI 2026] Flowing Backwards: Improving Normalizing Flows via Reverse Representation Alignment
Authors: Yang Chen, Xiaowei Xu, Shuai Wang, Chenhui Zhu, Ruxue Wen, Xubin Li, Tiezheng Ge, Limin Wang
<p align="center"> 📧 Primary Contact: yang-chen@smail.nju.edu.cn </p>
<p align="center"> <a href="https://arxiv.org/abs/2511.22345" target='_blank'> <img alt="Static Badge" src="https://img.shields.io/badge/arXiv-2511.22345-b31b1b?style=flat-square"> </a> </p>

🔍 Overview
This repository hosts the official open-source implementation for FlowBack.
FlowBack introduces a novel alignment strategy for Normalizing Flows (NFs). It works by aligning the features of the generative (reverse) pass with those from a pretrained vision encoder. This approach yields significant improvements in both generative quality and classification accuracy.
FlowBack is a joint project by Nanjing University and Alibaba Group.
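As a rough illustration of the alignment idea (not the paper's exact objective; the function name and feature shapes here are illustrative assumptions), one common way to align two feature sets is a negative cosine-similarity loss between the flow's reverse-pass features and the frozen encoder's features:

```python
import numpy as np

def alignment_loss(reverse_feats, encoder_feats, eps=1e-8):
    """Mean (1 - cosine similarity) between two batches of feature vectors.

    reverse_feats: features from the flow's generative (reverse) pass, (N, D)
    encoder_feats: features from a frozen pretrained encoder, (N, D)
    """
    a = reverse_feats / (np.linalg.norm(reverse_feats, axis=-1, keepdims=True) + eps)
    b = encoder_feats / (np.linalg.norm(encoder_feats, axis=-1, keepdims=True) + eps)
    return float(1.0 - (a * b).sum(axis=-1).mean())

f = np.ones((4, 16))
print(alignment_loss(f, f))   # ~0.0: perfectly aligned features
print(alignment_loss(f, -f))  # ~2.0: anti-aligned features
```

Minimizing this term pulls the reverse-pass features toward the pretrained representation while the usual flow likelihood objective is trained alongside it.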

🌟 Features
- 🚀 Model Training: Easily train our model from scratch on your own datasets.
- 📊 FID Evaluation: Calculate the Fréchet Inception Distance (FID) to measure image generation quality.
- 🎯 Training-Free Classification: Reproduce the paper's classification accuracy metric.
- ✨ Linear Probing: Evaluate the quality of learned representations through linear probing on intermediate features.
⚙️ Installation

1. Clone the repository:

```bash
git clone https://github.com/MCG-NJU/FlowBack.git
cd FlowBack
```

2. Create a virtual environment (recommended) and install the dependencies:

```bash
conda create -n flowback python=3.10
conda activate flowback
pip install -r requirements.txt
```
🚀 Usage
This section outlines the main workflows for using this repository: training a new model and evaluating it using various metrics.
1. Model Training
To train the model from scratch, run the provided train.sh script:

```bash
bash scripts/train.sh
```
2. Evaluation
After training, you can evaluate your model using the following metrics.
📊 a) FID Score Evaluation
To evaluate the Fréchet Inception Distance (FID), first generate the pre-computed statistics file for the target dataset (e.g., ImageNet), then run the evaluation script.

1. Prepare the FID stats:

```bash
python prepare_fid_stats.py --data /path/to/imagenet/
```

2. Calculate the FID:

```bash
bash scripts/local-eval.sh
```
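Under the hood, FID fits a Gaussian to the Inception features of real and generated images and measures the Fréchet distance between the two Gaussians. A minimal NumPy/SciPy sketch of the distance itself (independent of this repo's scripts, which handle feature extraction and statistics caching for you):

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu, sigma = np.zeros(4), np.eye(4)
print(round(fid(mu, sigma, mu, sigma), 6))              # 0.0: identical stats
print(round(fid(mu, sigma, mu + 1.0, sigma), 6))        # 4.0: shifted mean
```

Lower is better; the pre-computed stats file simply caches (mu, sigma) for the reference dataset so they are not recomputed on every evaluation.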
🎯 b) Classification Accuracy
To compute the training-free classification accuracy metric proposed in our paper, run the classify.sh script with your trained model checkpoint:

```bash
bash scripts/classify.sh
```
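Training-free classification with a normalizing flow typically exploits the fact that the model assigns an exact likelihood to each input. A toy sketch of the decision rule, assuming class-conditional log-likelihoods log p(x | y) have already been computed by the flow (the array values below are made up):

```python
import numpy as np

def classify(log_likelihoods):
    """Pick the class whose conditional flow assigns x the highest likelihood.

    log_likelihoods: (num_classes,) array of log p(x | y).
    With a uniform class prior, argmax over log p(x | y) is the Bayes rule.
    """
    return int(np.argmax(log_likelihoods))

# Hypothetical log-likelihoods for a 3-class problem
print(classify(np.array([-1200.0, -1150.3, -1300.8])))  # 1
```

No extra classifier is trained: the accuracy comes entirely from the generative model's likelihoods, which is why the metric is called training-free.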
✨ c) Linear Probing
Linear probing is a two-step process to evaluate the quality of the model's internal representations.
1. Cache features: use cache_repa_features.py to extract and save the intermediate features from your trained model.

```bash
accelerate launch cache_repa_features.py
```

2. Train the linear probe: once the features are cached, run lp.py to train a linear classifier on top of these features and compute the final classification accuracy.

```bash
accelerate launch lp.py
```
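Conceptually, the second step fits a single linear layer on the frozen cached features. A self-contained toy sketch with synthetic "cached features" (illustrative only, not the repo's lp.py; the data, shapes, and hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for cached features: two linearly separable classes, 8-dim
X = np.vstack([rng.normal(-2, 0.5, (50, 8)), rng.normal(2, 0.5, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# Linear probe: one linear layer trained with softmax cross-entropy,
# while the features themselves stay frozen.
W, b = np.zeros((8, 2)), np.zeros(2)
for _ in range(200):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - np.eye(2)[y]) / len(X)   # softmax cross-entropy gradient
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

acc = ((X @ W + b).argmax(axis=1) == y).mean()
print(acc)  # 1.0 on this separable toy set
```

Because only the linear layer is trained, probe accuracy directly reflects how linearly separable the model's intermediate representations are.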
:bouquet: Acknowledgements
This project is built upon TARFlow and FlowDCN. Thanks to the contributors of these great codebases.
