
HDRTVDM

The official repo of paper "Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and Degradation Models" (paper (ArXiv), paper, supplementary material) in CVPR2023.

```
@InProceedings{Guo_2023_CVPR,
    author    = {Guo, Cheng and Fan, Leidong and Xue, Ziyu and Jiang, Xiuhua},
    title     = {Learning a Practical SDR-to-HDRTV Up-Conversion Using New Dataset and Degradation Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {22231-22241}
}
```

1. Introduction

1.1. Our scope

There are many HDR-related methods in this year's CVPR. Ours differs from the others in that it converts a conventional SDR/BT.709 image to HDR/WCG in PQ/BT.2020 (which is called HDRTV by HDRTVNet (ICCV21)), and is meant to be applied in the media industry.

Our task can be called: SDR-to-HDRTV, ITM (inverse tone-mapping) or HDR/WCG up-conversion.

Other methods may take a single SDR to a linear-light HDR for graphics/rendering (SI-HDR, single-image HDR reconstruction), or merge several SDRs into a single HDR in the camera imaging pipeline (MEF-HDR, multi-exposure-fusion HDR imaging). Please jump to them if you are interested in those tasks.
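The container change involved in SDR-to-HDRTV — BT.709 primaries to BT.2020, and a PQ (SMPTE ST 2084) transfer function — can be sketched numerically. The matrix below is the standard linear-light BT.709 → BT.2020 primaries conversion (per ITU-R BT.2087), and `pq_oetf` is the ST 2084 encoding; this is background illustration, not code from this repo:

```python
import numpy as np

# Linear-light BT.709 -> BT.2020 primaries conversion (ITU-R BT.2087).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def pq_oetf(lum_nits):
    """SMPTE ST 2084 (PQ): absolute luminance in cd/m^2 -> [0, 1] signal."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(lum_nits / 10000.0, 0.0, 1.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

# BT.709 white maps to BT.2020 white: each matrix row sums to 1,
# so an equal-RGB input stays equal-RGB after conversion.
white_2020 = M_709_TO_2020 @ np.ones(3)
```

For orientation: SDR reference white (100 cd/m²) lands at roughly code value 0.51 on the PQ curve, and the 10000 cd/m² PQ peak maps to exactly 1.0.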

1.2 What we provide

  • PyTorch implementation of our luminance segmented network (LSN) with Transformer-UNet and self-adaptive convolution.
  • A new training set named HDRTV4K (3848 HDR/WCG-SDR image pairs, vs. 1235 in the previously largest set).
  • HDRTV4K's new test set (400 GT-LQ pairs, vs. 160 in the previously largest set); both the test and training sets provide 7 versions of degradation models.
  • MATLAB implementation of the non-reference HDR/WCG metrics FHLP/EHL/FWGP/EWG.
  • Other discussions...

1.3 Changelog

| Date | Log |
|:---:|:---|
| 13 Dec 2023 | Since most SoTA methods are still trained with the YouTube degradation model (DM), we add this DM to both our training and test sets, so you can: (1) train your network with the YouTube version of the HDRTV4K training set and get a look similar to SoTA methods; (2) directly test a SoTA method's original checkpoint (trained with the YouTube DM) on the YouTube version of the HDRTV4K test set. |
| 14 Jan 2024 | We change LSN (our network)'s default checkpoint to the one trained with the common HDRTV1K dataset (and YouTube DM), so you can directly compare it with SoTA methods in the conventional manner (PSNR, SSIM etc.). |

2. HDRTV4K Dataset (Training set & test set)

2.1 HDRTV4K Training set

Our major concerns on training data are:

| Aspect | Model's benefit |
|:---:|:---:|
| (1) Label HDR/WCG's (scene) diversity | better generalization ability |
| (2) Label HDR/WCG's quality<br>(especially the amount of advanced color and luminance volume) | more chance to produce advanced HDR/WCG volume |
| (3) SDR's extent of degradation | a proper degradation-recovery ability |
| (4) Style and aesthetic of the degraded SDR | better aesthetic performance<br>(or consistency from SDR) |

Hence, we provide the HDRTV4K label HDR (3848 individual frames) with better (1) diversity and (2) quality, available on:

| Training set label HDR/WCG download |
|:---:|
| BaiduNetDisk, GoogleDrive (TODO) |

After obtaining the label HDR, you can:

2.1.1. OPTION 1: Download the corresponding degraded SDR below:

| SDR from Degradation Model (DM) | DM usage | (3) Extent of degradation | (4) Style or aesthetic | Download |
|:---:|:---:|:---:|:---:|:---:|
| OCIO2 | our method | moderate | good | GoogleDrive, BaiduNetDisk (2.27GB) |
| 2446c+GM | our method | moderate | good | GoogleDrive, BaiduNetDisk (2.03GB) |
| HC+GM | our method | more | moderate | GoogleDrive, BaiduNetDisk (2.13GB) |
| 2446a | Chen2021 | less | bad | BaiduNetDisk |
| Reinhard | SR-ITM-GAN etc. | less | moderate | OneDrive, BaiduNetDisk |
| YouTube | most other methods using the HDRTV1K or KAIST training set (if used, you can learn a style similar to previous methods) | more | bad | GoogleDrive, BaiduNetDisk (2.51GB) |
| 2390EETF+GM | Zhang2023 | TODO | TODO | OneDrive, BaiduNetDisk |
| DaVinci <a id='DaVinciSDR'>(w. different settings)</a> | ITM-LUT, another algorithm of ours | less | good | GoogleDrive, BaiduNetDisk |

and use any of them as the input to train your network.
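With the label HDR and one DM's SDR downloaded, training pairs can be assembled by matching frame filenames across the two directories. A minimal sketch of that pairing logic — the directory layout, function name, and file extensions here are assumptions, not the repo's actual structure:

```python
from pathlib import Path

def build_pairs(hdr_dir, sdr_dir, exts=(".png", ".tif", ".tiff")):
    """Pair label HDR frames with degraded SDR frames that share a filename stem.

    Returns a sorted list of (sdr_path, hdr_path) tuples, one per frame
    present in BOTH directories; unmatched frames are silently skipped.
    """
    hdr = {p.stem: p for p in Path(hdr_dir).iterdir() if p.suffix.lower() in exts}
    sdr = {p.stem: p for p in Path(sdr_dir).iterdir() if p.suffix.lower() in exts}
    common = sorted(hdr.keys() & sdr.keys())
    return [(sdr[k], hdr[k]) for k in common]
```

Such a list plugs straight into a `torch.utils.data.Dataset` whose `__getitem__` loads and decodes one (input, label) pair.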

Since our degradation models (DMs) are just a preliminary attempt on concerns (3) and (4), we encourage you to:

2.1.2. OPTION 2 (Encouraged): Use your own degradation model to obtain input SDR

In this case, you can:

  • Change the style and aesthetic of the degraded SDR to better suit your own technical and artistic intention, or involve your expertise in color science etc. for more precise control between SDR and HDR.
  • Control the extent of degradation to follow the statistics of the target SDR in your own application scenario (e.g. remastering legacy SDR or converting on-the-air SDR). You can even add diversity to the extent of degradation to endow your network with generalizability to various extents of degradation.
  • Add new types of degradation, e.g. camera noise, compression artifacts, motion blur, chromatic aberration, film grain etc., for a more specific application scenario. These degradations are comparatively well studied, with both traditional and deep-learning models available.
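As a starting point for OPTION 2, a degradation model can be as simple as a global tone curve plus a display gamma. The sketch below uses the classic Reinhard operator (one of the DMs listed in OPTION 1 is Reinhard-based, but this is a generic illustration, not the repo's exact implementation; `peak_nits` and `gamma` are assumed parameters):

```python
import numpy as np

def reinhard_degrade(hdr_linear, peak_nits=1000.0, gamma=2.2):
    """Toy HDR->SDR degradation: Reinhard global tone curve + display gamma.

    hdr_linear : linear-light HDR values normalized so 1.0 = 100 cd/m^2.
    peak_nits  : assumed HDR peak, used to scale highlights before the curve.
    """
    x = np.clip(hdr_linear, 0.0, None) * (peak_nits / 100.0)  # scale vs. SDR white
    tone = x / (1.0 + x)                                      # Reinhard: L / (1 + L)
    return np.clip(tone, 0.0, 1.0) ** (1.0 / gamma)           # SDR gamma encoding
```

Varying `peak_nits`, the curve shape, or the gamut-mapping step per sample is one way to realize the "diversity of degradation" suggested above.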

2.2 HDRTV4K Test set

The test set used in our paper (consecutive frames) is copyrighted and will not be released. We provide an alternative test set consisting of 400 individual frames covering even more scenes. HDRTV4K's test set shares similar concerns with the training set:

| Better | The test set will better manifest the algorithm's |
|:---:|:---:|
| (1) GT HDR/WCG's (scene) diversity | scene generalization ability |
| (2) GT HDR/WCG's advanced color and luminance volume | mapping/expansion ability of advanced HDR/WCG volume |
| (3a) Input SDR's extent of degradation | degradation recovery ability |
| (3b) Input SDR's diversity of degradation | degradation generalization ability |

It's available on:

| Test set GT and LQ download |
|:---:|
| BaiduNetDisk and GoogleDrive (TODO) |

This package contains 1 version of GT and 7 versions of LQ by different degradation models, so:

  • You should test on the LQ version from the same degradation model (i.e. if your model is trained with OCIO2 SDR, you should also test it on OCIO2 SDR), otherwise conventional distance-
