FriedRiceLab
Official repository of the Fried Rice Lab, including code resources of our works (ESWT [arXiv], etc.). The repository also implements many useful features and out-of-the-box image restoration models.
Fried Rice Lab
We will release code resources for our works here, including:
- ESWT [arXiv]
We also implement many useful features, including:
- Allow free combination of different models and tasks with new run commands (2 Run)
- Analyse the complexity of a specific model on a specific task (2.3 Analyse)
- Interpret super-resolution models using local attribution maps (LAM) (2.4 Interpret)
- Restore your own images using existing models (2.5 Infer)
- (New!) Measure representational similarity using minibatch centered kernel alignment (2.6 CKA)
- (New!) Calculate mean attention distance of self-attention (2.7 MAD)
- (New!) Combine multiple datasets as a training set (Combine Dataset)
- Train/test models with any data flow (Data Flow)
- Load LMDB databases in a more customizable way (LMDB Loading)
And many out-of-the-box image restoration models, including:
- 2017: EDSR [CVPRW]
- 2018: RCAN [ECCV], RDN [CVPR]
- 2019: IMDN [ACM MM], RNAN [ICLR]
- 2020: CSNLN [CVPR], LAPAR [NeurIPS], LatticeNet [ECCV], PAN [ECCV], RFDN [ECCV], SAN [CVPR], HAN [ECCV]
- 2021: FDIWN [AAAI], HSENet [TGRS], SwinIR [ICCV]
- 2022: BSRN [CVPRW], ELAN [ECCV], ESRT [CVPRW], LBNet [IJCAI], NAFNet [ECCV], RLFN [CVPRW], SCET [CVPRW], MAN [arXiv], ShuffleMixer [NeurIPS], FMEN [CVPRW], HNCT [CVPRW], EFDN [CVPRW], vapSR [ECCVW]
We hope this repository helps your work.
Table of contents
<!--ts--> <!--te-->
FRL News
23.02.05 Preview. We are working on a new image super-resolution work, whose performance is shown in the figure below. The manuscript and code resources will be released as soon as possible.

23.01.31 Release the code resources of ESWT 🎉
23.01.24 Release the manuscript of our new work ESWT on arXiv
23.01.11 FRL code v2.0 released
22.11.15 Here we are 🪧
Our Works
(ESWT) Image Super-Resolution using Efficient Striped Window Transformer [arXiv]
Jinpeng Shi*^, Hui Li, Tianle Liu, Yulong Liu, Mingjian Zhang, Jinchen Zhu, Ling Zheng, Shizhuang Weng^
Transformers have achieved remarkable results in single-image super-resolution (SR). However, the challenge of balancing model performance and complexity has hindered their application in lightweight SR (LSR). To tackle this challenge, we propose an efficient striped window transformer (ESWT). We revisit the normalization layer in the transformer and design a concise and efficient transformer structure to build the ESWT. Furthermore, we introduce a striped window mechanism to model long-term dependencies more efficiently. To fully exploit the potential of the ESWT, we propose a novel flexible window training strategy that can improve the performance of the ESWT without additional cost. Extensive experiments show that ESWT outperforms state-of-the-art LSR transformers and achieves a better trade-off between model performance and complexity. ESWT requires fewer parameters, runs faster, uses fewer FLOPs, and consumes less memory, making it a promising solution for LSR. [More details and reproduction guidance]
*: (Co-)first author(s)
^: (Co-)corresponding author(s)
How to Use
1 Preparation
1.1 Environment
Use the following command to build the Python environment:
conda create -n frl python
conda activate frl
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple # Mainland China only!
pip install torch torchvision basicsr einops timm matplotlib
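After installing, you can sanity-check the environment. The snippet below is not part of the repository; it is a minimal sketch that uses the standard library to verify that the packages installed above are importable:

```python
import importlib.util

# Packages installed by the pip command above; adjust if your setup differs.
REQUIRED = ["torch", "torchvision", "basicsr", "einops", "timm", "matplotlib"]

def missing_packages(names):
    """Return the subset of `names` that cannot be found by the import system."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("Environment OK")
```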
1.2 Dataset
You can download the datasets you need from our OneDrive and place them in the folder datasets. To use the YML configuration files we provide, keep your local datasets folder in the same directory tree as the OneDrive folder datasets.
| Task      | Dataset  | Relative Path                |
| --------- | -------- | ---------------------------- |
| SISR      | DF2K     | datasets/sr_data/DF2K        |
|           | Set5     | datasets/sr_data/Set5        |
|           | Set14    | datasets/sr_data/Set14       |
|           | BSD100   | datasets/sr_data/BSD100      |
|           | Urban100 | datasets/sr_data/Urban100    |
|           | Manga109 | datasets/sr_data/Manga109    |
| Denoising | SIDD     | datasets/denoising_data/SIDD |
🤠 All datasets have been processed into LMDB format and do not require any additional processing. The SISR datasets were processed following the BasicSR documentation, and the denoising dataset following the NAFNet documentation.
🤠 To verify the integrity of your download, please refer to docs/md5.txt.
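If you prefer to script the integrity check, MD5 digests can be computed with Python's standard hashlib and compared against the values in docs/md5.txt. The helper below is illustrative, not part of the repository:

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks
    so that large LMDB files do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical path): compare against the digest listed in docs/md5.txt
# assert file_md5("datasets/sr_data/Set5") == "<digest from docs/md5.txt>"
```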
1.3 Pretraining Weight
You can download the pretraining weights you need from our OneDrive and place them in the folder modelzoo. To use the YML configuration files we provide, keep your local modelzoo folder in the same directory tree as the OneDrive folder modelzoo.
| Source     | Model | Relative Path |
| ---------- | ----- | ------------- |
| Official   | ESWT  | modelzoo/ESWT |
| Unofficial | ELAN  | modelzoo/ELAN |
🤠 The unofficial pre-trained weights were trained by us, under exactly the same experimental conditions as in the original paper.
2 Run
Unlike BasicSR, the FRL code requires you to specify two YML configuration files. The run command is as follows:
python ${function.py} -expe_opt ${expe.yml} -task_opt ${task.yml}
- ${function.py} is the function you want to run, e.g. test.py
- ${expe.yml} is the path to the experiment YML configuration file, which contains the model-related and training-related configuration, e.g. expe/ESWT/ESWT_LSR.yml
- ${task.yml} is the path to the task YML configuration file, which contains the task-related configuration, e.g. expe/task/LSR_x4.yml
🤠 A complete experiment consists of three parts: the data, the model, and the training strategy. This design allows their configuration to be decoupled.
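As an illustration of this decoupling, the sketch below merges an experiment dict and a task dict into one option set. The merge_options helper and the example keys are hypothetical; the actual FRL merge logic may differ:

```python
def merge_options(expe_opt: dict, task_opt: dict) -> dict:
    """Recursively merge task options into experiment options.

    Values from task_opt win on conflict, mirroring the idea that the
    task YML supplies the task-related fields of the final configuration.
    """
    merged = dict(expe_opt)
    for key, value in task_opt.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_options(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical contents of the two YML files, loaded as dicts.
expe = {"network": {"type": "ESWT", "depth": 24}, "train": {"lr": 2e-4}}
task = {"task": "LSR", "scale": 4, "train": {"batch_size": 32}}
opt = merge_options(expe, task)
# opt now holds model, training, and task settings in a single dict.
```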
For your convenience, we provide a demo test set datasets/demo_data/Demo_Set5 and a demo pre-training weight modelzoo/ELAN/ESWT-24-6_LSR_x4.pth. Use the following commands to try out the main functions of the FRL code.
2.1 Train
This function will train a specified model.
python train.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

🤠 Use the following demo command instead if you prefer to run in CPU mode:
python train.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml --force_yml num_gpu=0
2.2 Test
This function will test the performance of a specified model on a specified task.
python test.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

2.3 Analyse
This function will analyse the complexity of a specified model on a specified task, reporting the following metrics:
- #Params: total number of learnable parameters
- #FLOPs: number of floating point operations
- #Acts: number of elements in all outputs of convolutional layers
- #Conv: number of convolutional layers
- #Memory: maximum GPU memory consumption when inferring a dataset
- #Ave. Time: average inference time per image in a dataset
python analyse.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml
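To make the first two of these metrics concrete, the toy sketch below computes #Params and #Acts from per-layer weight and output shapes. The shapes are illustrative only (not taken from any real model), biases are omitted for simplicity, and this is not the repository's analyse.py logic:

```python
from math import prod

# Toy per-layer description: (weight_shape, output_shape) for each conv layer.
# Weight shapes are OIHW; output shapes are CHW. Values are illustrative.
layers = [
    ((64, 3, 3, 3), (64, 256, 256)),   # conv1
    ((64, 64, 3, 3), (64, 256, 256)),  # conv2
]

def num_params(layers):
    """#Params: total number of learnable weights (biases omitted)."""
    return sum(prod(w) for w, _ in layers)

def num_acts(layers):
    """#Acts: total number of elements in all conv-layer outputs."""
    return sum(prod(o) for _, o in layers)

print(num_params(layers))  # 1728 + 36864 = 38592
print(num_acts(layers))    # 2 * 64*256*256 = 8388608
```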

2.4 Interpret
This function comes from the paper "Interpreting Super-Resolution Networks with Local Attribution Maps". When reconstructing the patches marked with red boxes, a higher diffusion index (DI) indicates that a wider range of contextual information is involved, and a darker color indicates a higher degree of contribution.
python interpret.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml


2.5 Infer
You can use this function to restore your own images.
python infer.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

2.6 CKA
This function comes from the paper "Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth". It measures the similarity between the representations learned by different models (or layers) using minibatch centered kernel alignment (CKA).
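For intuition, the sketch below implements the linear variant of CKA in plain Python on small row-major matrices (samples x features). The repository's minibatch implementation may differ, e.g. by batching over a dataset or running on GPU tensors; this is only an illustration of the formula:

```python
def _center(m):
    """Subtract the column mean from each column of matrix m (list of rows)."""
    n = len(m)
    means = [sum(r[j] for r in m) / n for j in range(len(m[0]))]
    return [[r[j] - means[j] for j in range(len(r))] for r in m]

def _gram_t(a, b):
    """Compute a^T @ b for row-major matrices a (n x p) and b (n x q)."""
    p, q, n = len(a[0]), len(b[0]), len(a)
    return [[sum(a[k][i] * b[k][j] for k in range(n)) for j in range(q)]
            for i in range(p)]

def _fro(m):
    """Frobenius norm of a row-major matrix."""
    return sum(v * v for row in m for v in row) ** 0.5

def linear_cka(x, y):
    """Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F),
    with column-centered features. Assumes non-degenerate inputs."""
    x, y = _center(x), _center(y)
    hsic = _fro(_gram_t(y, x)) ** 2
    return hsic / (_fro(_gram_t(x, x)) * _fro(_gram_t(y, y)))

# Identical representations yield a similarity of 1; linear CKA is also
# invariant to isotropic scaling of either representation.
x = [[1.0, 2.0], [3.0, 1.0], [0.0, 4.0]]
assert abs(linear_cka(x, x) - 1.0) < 1e-9
```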