Do image and video quality metrics model low-level human vision?

20 March 2025
Dounia Hammou, Yancheng Cai, Pavan Madhusudanarao, Christos G. Bampis, Rafał K. Mantiuk
Abstract

Image and video quality metrics, such as SSIM, LPIPS, and VMAF, aim to predict the perceived quality of the evaluated content and are often claimed to be "perceptual". Yet, few metrics directly model human visual perception, and most rely on hand-crafted formulas or training datasets to achieve alignment with perceptual data. In this paper, we propose a set of tests for full-reference quality metrics that examine their ability to model several aspects of low-level human vision: contrast sensitivity, contrast masking, and contrast matching. The tests are meant to provide additional scrutiny for newly proposed metrics. We use our tests to analyze 33 existing image and video quality metrics and identify their strengths and weaknesses, such as the ability of LPIPS and MS-SSIM to predict contrast masking and the poor performance of VMAF at this task. We further find that the popular SSIM metric overemphasizes differences in high spatial frequencies, but that its multi-scale counterpart, MS-SSIM, addresses this shortcoming. Findings such as these are difficult to obtain with existing evaluation protocols.
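For readers unfamiliar with the setup: a full-reference metric scores a distorted image against a pristine reference. The sketch below illustrates the general idea behind such synthetic-stimulus probes, assuming scikit-image's SSIM implementation; the stimulus design (grating size, spatial frequency, contrast levels) is our own placeholder, not the authors' test suite.

# Illustrative probe (an assumption about test design, not the paper's
# actual protocol): measure how SSIM responds as the contrast of a
# sinusoidal grating increases relative to a uniform mid-gray reference.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def grating(size=256, cycles=8, contrast=0.5):
    # Horizontal sinusoid around mid-gray; amplitude = contrast / 2.
    x = np.linspace(0.0, 2.0 * np.pi * cycles, size)
    row = 0.5 + (contrast / 2.0) * np.sin(x)
    return np.tile(row, (size, 1))

reference = np.full((256, 256), 0.5)  # uniform mid-gray field

for c in (0.01, 0.05, 0.2, 0.8):
    score = ssim(reference, grating(contrast=c), data_range=1.0)
    print(f"contrast={c:.2f}  SSIM={score:.4f}")

A metric that models human contrast sensitivity would show a frequency-dependent response here (e.g., when sweeping the cycles parameter), which is the kind of behavior the paper's tests are designed to expose.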

BibTeX
@article{hammou2025_2503.16264,
  title={Do image and video quality metrics model low-level human vision?},
  author={Dounia Hammou and Yancheng Cai and Pavan Madhusudanarao and Christos G. Bampis and Rafał K. Mantiuk},
  journal={arXiv preprint arXiv:2503.16264},
  year={2025}
}