FAR: Frequency Autoregressive Image Generation with Continuous Tokens <br><sub>Official PyTorch Implementation</sub>
<p align="center"> <img src="demo/Visual_ImageNet.png" width="720"> </p>

📰 News
- [2025-3-7] We release the code and checkpoints of FAR for class-to-image generation on the ImageNet dataset.
- [2025-3-7] The tech report of FAR is available.
Preparation
Installation
Download the code:
```bash
git clone https://github.com/yuhuUSTC/FAR.git
cd FAR
```
A suitable conda environment named `far` can be created and activated with:

```bash
conda env create -f environment.yaml
conda activate far
```
Dataset
Download the ImageNet dataset and place it in your `IMAGENET_PATH`.
Pretrained Weights
- Download the pre-trained VAE and place it in `pretrained/vae/`.
- Download the `.npz` statistics of ImageNet 256x256 (for computing the FID metric) and place them in `fid_stats/`.
- Download the weights of FAR_B and place them in `pretrained_models/far/far_base/`.
- Download the weights of FAR_L and place them in `pretrained_models/far/far_large/`.
- Download the weights of FAR_H and place them in `pretrained_models/far/far_huge/`.
- Download the weights of FAR_T2I and place them in `pretrained_models/far/far_t2i/`.
For convenience, our pre-trained FAR models can also be downloaded directly here:

| FAR Model | FID-50K | Inception Score | #Params |
|-----------|---------|-----------------|---------|
| FAR-B     | 4.83    | 247.4           | 208M    |
| FAR-L     | 3.92    | 288.9           | 451M    |
| FAR-H     | 3.71    | 304.9           | 812M    |
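The reference `.npz` above stores the mean and covariance of Inception features for real ImageNet images. As a rough sketch of how those statistics enter the FID metric (the actual evaluation code in this repo may implement it differently), the Fréchet distance between two Gaussians can be computed as follows; the trace of the matrix square root is taken via the eigenvalues of the covariance product:

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2).

    Tr(sqrt(sigma1 @ sigma2)) is computed from the eigenvalues of the
    product, which are real and non-negative for PSD covariances.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * covmean_trace)

# Identical statistics give a distance of 0.
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))  # → 0.0
```

In practice, FID-50K compares these statistics against those of 50K generated samples.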
(Optional) Caching VAE Latents
Given that our data augmentation consists of simple center cropping and random flipping,
the VAE latents can be pre-computed and saved to `CACHED_PATH` to save computation during FAR training:
```bash
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 \
main_cache.py \
--img_size 256 --vae_path pretrained_models/vae/kl16.ckpt --vae_embed_dim 16 \
--batch_size 128 \
--data_path ${IMAGENET_PATH} --cached_path ${CACHED_PATH}
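The VAE arguments determine how many continuous tokens each image becomes. As a sketch of the arithmetic implied by the flags above (the exact internal layout may differ), a 256x256 image with `vae_stride 16` and `patch_size 1` yields a 16x16 latent grid of 16-dimensional tokens:

```python
def latent_geometry(img_size, vae_stride, vae_embed_dim, patch_size):
    """Derive the token count and token dimension implied by the VAE flags."""
    grid = img_size // (vae_stride * patch_size)   # tokens per side
    token_dim = vae_embed_dim * patch_size ** 2    # channels per token
    return grid * grid, token_dim

num_tokens, token_dim = latent_geometry(img_size=256, vae_stride=16,
                                        vae_embed_dim=16, patch_size=1)
print(num_tokens, token_dim)  # → 256 16
```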
FAR Framework
<p align="center"> <img src="demo/FAR_framework.png" width="720"> </p>

Training (ImageNet 256x256)
Run the following command, which contains the training scripts for the various model sizes (FAR-B, FAR-L, FAR-H):

```bash
bash train.sh
```
For example, the default script for FAR-L is:
```bash
torchrun --nproc_per_node=8 --nnodes=4 --node_rank=${NODE_RANK} --master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} \
main_far.py \
--img_size 256 --vae_path pretrained_models/vae/kl16.ckpt --vae_embed_dim 16 --vae_stride 16 --patch_size 1 \
--model far_large --diffloss_d 3 --diffloss_w 1024 \
--epochs 400 --warmup_epochs 100 --batch_size 64 --blr 1.0e-4 --diffusion_batch_mul 4 \
--output_dir ${OUTPUT_DIR} --resume ${OUTPUT_DIR} \
--data_path ${IMAGENET_PATH}
```
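Note that `--blr` is a base learning rate, not the final one. In MAE-derived codebases the actual rate is typically scaled linearly with the total batch size; the sketch below assumes that convention (check `main_far.py` for the exact rule used here):

```python
def effective_lr(blr, batch_per_gpu, gpus_per_node, num_nodes, base_batch=256):
    """MAE-style linear LR scaling: lr = blr * total_batch / base_batch.

    Assumed convention, common in MAE-derived codebases.
    """
    total_batch = batch_per_gpu * gpus_per_node * num_nodes
    return blr * total_batch / base_batch

# The script above: --batch_size 64 per GPU, on 8 GPUs x 4 nodes.
print(effective_lr(1.0e-4, 64, 8, 4))  # → 0.0008
```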
- (Optional) Add `--online_eval` to evaluate FID during training (every 40 epochs).
- (Optional) To enable the uneven loss weight strategy, add `--loss_weight` to the arguments.
- (Optional) To train with cached VAE latents, add `--use_cached --cached_path ${CACHED_PATH}` to the arguments.
Evaluation (ImageNet 256x256)
Run the following command, which contains the inference scripts for the various model sizes (FAR-B, FAR-L, FAR-H):

```bash
bash sample.sh
```
For example, the default inference script for FAR-L is:
```bash
torchrun --nnodes=1 --nproc_per_node=8 main_far.py \
--img_size 256 --vae_path pretrained/vae_mar/kl16.ckpt --vae_embed_dim 16 --vae_stride 16 --patch_size 1 \
--model far_large --diffloss_d 3 --diffloss_w 1024 \
--eval_bsz 32 --num_images 1000 \
--num_iter 10 --num_sampling_steps 100 --cfg 3.0 --cfg_schedule linear --temperature 1.0 \
--output_dir pretrained_models/far/far_large \
--resume pretrained_models/far/far_large \
--data_path ${IMAGENET_PATH} --evaluate
```
- Add `--mask` to increase generation diversity.
- We adopt 10 autoregressive steps by default.
- Generation speed can be further increased by reducing the number of diffusion steps (e.g., `--num_sampling_steps 50`).
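Regarding `--cfg_schedule linear`: in MAR-style samplers, a linear schedule typically ramps the guidance scale up from 1.0 (no guidance) at the first autoregressive step to the target `--cfg` at the last. The sketch below illustrates that assumed behavior; the exact formula lives in the sampling code of `main_far.py`:

```python
def linear_cfg_schedule(cfg, num_iter):
    """Linearly ramp guidance from 1.0 at the first AR step to `cfg` at the last.

    A sketch of what `--cfg_schedule linear` plausibly does; consult the
    sampler for the exact rule.
    """
    return [1.0 + (cfg - 1.0) * step / (num_iter - 1) for step in range(num_iter)]

sched = linear_cfg_schedule(cfg=3.0, num_iter=10)
print(sched[0], sched[-1])  # → 1.0 3.0
```

Ramping guidance this way keeps early, coarse steps diverse while sharpening class conditioning on later steps.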
Training (T2I)
Script for the default setting:
```bash
torchrun --nproc_per_node=8 --nnodes=4 --node_rank=${NODE_RANK} --master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} \
main_far_t2i.py \
--img_size 256 --vae_path pretrained/vae_mar/kl16.ckpt --vae_embed_dim 16 --vae_stride 16 --patch_size 1 \
--model far_t2i --diffloss_d 3 --diffloss_w 1024 \
--epochs 400 --warmup_epochs 100 --batch_size 64 --blr 1.0e-4 --diffusion_batch_mul 4 \
--output_dir ${OUTPUT_DIR} --resume ${OUTPUT_DIR} \
--text_model_path pretrained/Qwen2-VL-1.5B-Instruct \
--data_path ${T2I_PATH}
```
- The text encoder employs Qwen2-VL-1.5B; download it and place it in `pretrained/Qwen2-VL-1.5B-Instruct/`.
- Replace `T2I_PATH` with the path to your text-to-image dataset.
Evaluation (T2I)
Script for the default setting:
```bash
torchrun --nnodes=1 --nproc_per_node=8 main_far_t2i.py \
--img_size 256 --vae_path pretrained/vae_mar/kl16.ckpt --vae_embed_dim 16 --vae_stride 16 --patch_size 1 \
--model far_t2i --diffloss_d 3 --diffloss_w 1024 \
--eval_bsz 32 \
--num_iter 10 --num_sampling_steps 100 --cfg 3.0 --cfg_schedule linear --temperature 1.0 \
--output_dir pretrained_models/far/far_t2i \
--resume pretrained_models/far/far_t2i \
--text_model_path pretrained/Qwen2-VL-1.5B-Instruct \
--data_path ${T2I_PATH} --evaluate
```
- Add `--mask` to increase generation diversity.
- We adopt 10 autoregressive steps by default.
- Generation speed can be further increased by reducing the number of diffusion steps (e.g., `--num_sampling_steps 50`).
Acknowledgements
A large portion of the code in this repo is based on MAE and MAR. Thanks for these great works and for open-sourcing them.
Contact
If you have any questions, feel free to contact me through email (yuhu520@mail.ustc.edu.cn). Enjoy!
Citation
```bibtex
@article{yu2025frequency,
  author  = {Hu Yu and Hao Luo and Hangjie Yuan and Yu Rong and Feng Zhao},
  title   = {Frequency Autoregressive Image Generation with Continuous Tokens},
  journal = {arXiv preprint arXiv:2503.05305},
  year    = {2025}
}
```
