:v::thumbsup::triangular_flag_on_post: $\mathbb{VIP}$<b>Net</b>: <b>Visual Interaction Perceptual Network</b> for <b>Blind Image Quality Assessment</b>


:file_folder: Intro

Click here for file information

:arrows_counterclockwise: Workflow

To retrain the distortion perception model (DPM), you can either follow the steps below or download the pre-trained weights directly (see the Model section).

  • Generate distorted images; the relevant descriptions can be found in the Dataset section below.
  • Train the distortion perception model by making sure the dataset path is correct and then executing `bash train_dpm.sh` (a minimal training-loop sketch follows this list).
  • After training is complete, move the best pre-trained weights to the 'pretrained_model' folder.
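
The repository's DPM internals are not shown here, but a distortion perception model of this kind is typically trained as a distortion classifier over the generated images. The following is a minimal, hypothetical PyTorch sketch of that step; the dataset class is omitted, and the backbone, class count, and hyperparameters are assumptions rather than the authors' settings.

```python
# Hypothetical sketch of a distortion-perception training loop (not the repository's code).
# Assumes a DataLoader that yields (image_tensor, distortion_label) pairs built from the
# generated distorted images; all hyperparameters below are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

NUM_DISTORTION_TYPES = 30          # 30 distortion types per the Dataset section (assumed class count)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Backbone with a classification head over distortion types (backbone choice is an assumption).
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_DISTORTION_TYPES)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader: DataLoader) -> float:
    """Run one epoch over the distorted images and return the mean training loss."""
    model.train()
    total, count = 0.0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item() * images.size(0)
        count += images.size(0)
    return total / max(count, 1)

# After training, the best checkpoint would be saved and moved to 'pretrained_model/', e.g.
# torch.save(model.state_dict(), "pretrained_model/dpm_best.pth")
```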

Next, evaluate the proposed model on IQA datasets using the following steps:

  • For a single-dataset test, refer to the configs.py file for additional parameters, then execute `python train.py`.
  • For a cross-dataset test, refer to the configs.py file for additional parameters, then execute `bash train_cross_datast.sh` (a sketch of the correlation metrics reported by such tests follows this list).
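
Single- and cross-dataset tests in blind IQA are conventionally reported as Spearman (SROCC) and Pearson (PLCC) correlations between predicted scores and the ground-truth MOS. The snippet below is a generic sketch of that metric computation using SciPy; the function and variable names are illustrative and do not come from the repository.

```python
# Generic IQA evaluation metrics: SROCC (rank correlation) and PLCC (linear correlation)
# between predicted and subjective (MOS) quality scores. Illustrative only.
import numpy as np
from scipy.stats import spearmanr, pearsonr

def iqa_metrics(pred_scores, mos_scores):
    """Return (SROCC, PLCC) for a set of predictions against ground-truth MOS."""
    pred = np.asarray(pred_scores, dtype=float)
    mos = np.asarray(mos_scores, dtype=float)
    srocc, _ = spearmanr(pred, mos)
    plcc, _ = pearsonr(pred, mos)
    return srocc, plcc

# Example with dummy values:
# srocc, plcc = iqa_metrics([0.2, 0.5, 0.9], [0.25, 0.55, 0.8])
```

A cross-dataset test uses the same metrics; the only difference is that the model is trained on one IQA dataset and evaluated on another to measure generalization.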

:bar_chart: Dataset

  • The study generated 30 types of distorted images, of which 25 are identical to those in the KADID-10k dataset and can be obtained by running the 'dataset_generator.m' file in Matlab.
  • The additional four types of distorted images, namely pink noise, contrast change, underexposure, and overexposure, can be generated by running the 'additional_dataset_generator.m' file (a NumPy sketch of two of these distortions is given after this list).
  • The lossy compression distorted images can be downloaded from this link.
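
The actual generators are Matlab scripts, but as an illustration of what two of the four extra distortions amount to, here is a hedged NumPy sketch of exposure change and pink noise; the parameter values are placeholders and are not taken from 'additional_dataset_generator.m'.

```python
# Illustrative generation of two of the additional distortion types (exposure change and
# pink noise). The repository uses Matlab scripts; this NumPy sketch only demonstrates
# the general idea, and the parameter values are placeholders, not the authors' settings.
import numpy as np

def exposure_change(img: np.ndarray, gamma: float) -> np.ndarray:
    """Gamma-based exposure shift on an image in [0, 1]:
    gamma < 1 brightens (overexposure-like), gamma > 1 darkens (underexposure-like)."""
    return np.clip(img ** gamma, 0.0, 1.0)

def add_pink_noise(img: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Add 1/f ('pink') noise: white noise shaped in the frequency domain by 1/f."""
    h, w = img.shape[:2]
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    f = np.sqrt(fy ** 2 + fx ** 2)
    f[0, 0] = 1.0                      # avoid division by zero at the DC term
    white = np.random.randn(h, w)
    pink = np.real(np.fft.ifft2(np.fft.fft2(white) / f))
    pink /= np.abs(pink).max()         # normalise the noise field to [-1, 1]
    noise = pink[..., None] if img.ndim == 3 else pink
    return np.clip(img + strength * noise, 0.0, 1.0)
```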

The full dataset contains 6 million images and requires about 3 TB of storage; if regenerating it is impractical, the pre-trained models described in the next section can be downloaded instead.

:gear: Model

The pre-trained DPM models can be downloaded from Google Drive or Baidu Cloud and should be saved to the 'pretrained_model' folder.
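
For reference, loading such a checkpoint typically looks like the hedged sketch below; the default file name and the expectation that the checkpoint is a plain state dict are assumptions, since the actual contents of the download are not described here.

```python
# Hypothetical helper for loading a downloaded DPM checkpoint; the default file name
# 'pretrained_model/dpm_best.pth' is an assumption, not necessarily the real file name.
import torch

def load_dpm_weights(model: torch.nn.Module,
                     checkpoint_path: str = "pretrained_model/dpm_best.pth") -> torch.nn.Module:
    """Load a state dict into an already-constructed DPM and switch it to eval mode."""
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state_dict, strict=True)   # fails loudly if keys do not match
    return model.eval()
```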

:bookmark_tabs: Citation

If our research has been helpful to you, please consider citing our paper in your work.

@ARTICLE{vipnet2023_wang,
  author={Wang, Xiaoqi and Xiong, Jian and Lin, Weisi},
  journal={IEEE Transactions on Multimedia}, 
  title={Visual Interaction Perceptual Network for Blind Image Quality Assessment}, 
  year={2023},
  volume={25},
  number={},
  pages={8958-8971},
  doi={10.1109/TMM.2023.3243683}}

💖 Acknowledgement

Thanks to the contributors of the GitHub repositories HyperIQA and BoTNet, parts of whose code were referenced while developing this project.
