Training Neural Networks on RAW and HDR Images for Restoration Tasks

6 December 2023
Lei Luo
Alexandre Chapiro
Xiaoyu Xiang
Yuchen Fan
Rakesh Ranjan
Rafał K. Mantiuk
Abstract

The vast majority of standard image and video content available online is represented in display-encoded color spaces, in which pixel values are conveniently scaled to a limited range (0-1) and the color distribution is approximately perceptually uniform. In contrast, both camera RAW and high dynamic range (HDR) images are often represented in linear color spaces, in which color values are linearly related to colorimetric quantities of light. While training on commonly available display-encoded images is a well-established practice, there is no consensus on how neural networks should be trained for tasks on RAW and HDR images in linear color spaces. In this work, we test several approaches on three popular image restoration applications: denoising, deblurring, and single-image super-resolution. We examine whether HDR/RAW images need to be display-encoded using popular transfer functions (PQ, PU21, and mu-law), or whether it is better to train in linear color spaces, but use loss functions that correct for perceptual non-uniformity. Our results indicate that neural networks train significantly better on HDR and RAW images represented in display-encoded color spaces, which offer better perceptual uniformity than linear spaces. This small change to the training strategy can bring a very substantial gain in performance, between 2 and 9 dB.
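
For context on the transfer functions named above, the following is a minimal sketch (not taken from the paper) of applying a display-encoding function to linear HDR values before computing a restoration loss. It assumes PyTorch, an illustrative mu = 5000 for the mu-law curve, and luminance pre-scaled so that 1.0 corresponds to 10,000 cd/m^2 for PQ; PU21 is omitted because its fitted coefficients are not given in the abstract.

import torch

def mu_law_encode(x, mu=5000.0):
    # Log-like compression of linear values in [0, 1] (mu = 5000 is an
    # illustrative choice, not necessarily the paper's setting).
    return torch.log1p(mu * x) / torch.log1p(torch.tensor(mu))

def pq_encode(y):
    # SMPTE ST 2084 (PQ) inverse EOTF; y is linear luminance / 10,000 cd/m^2.
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2, c3 = 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    yp = torch.clamp(y, 0.0, 1.0) ** m1
    return ((c1 + c2 * yp) / (1.0 + c3 * yp)) ** m2

def encoded_l1_loss(pred_linear, target_linear, encode=mu_law_encode):
    # Compare images in a display-encoded, approximately perceptually
    # uniform space rather than directly on linear color values.
    return torch.nn.functional.l1_loss(encode(pred_linear), encode(target_linear))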

@article{ke2025_2312.03640,
  title={Training Neural Networks on RAW and HDR Images for Restoration Tasks},
  author={Andrew Yanzhe Ke and Lei Luo and Xiaoyu Xiang and Yuchen Fan and Rakesh Ranjan and Alexandre Chapiro and Rafał K. Mantiuk},
  journal={arXiv preprint arXiv:2312.03640},
  year={2025}
}