CLIC: Contrastive Learning Framework for Unsupervised Image Complexity Representation

19 November 2024
Shipeng Liu, Liang Zhao, Dengfeng Chen
SSL
Abstract

As a fundamental visual attribute, image complexity significantly influences both human perception and the performance of computer vision models. However, accurately assessing and quantifying image complexity remains a challenging task. (1) Traditional metrics such as information entropy and compression ratio often yield coarse and unreliable estimates. (2) Data-driven methods require expensive manual annotations and are inevitably affected by human subjective biases. To address these issues, we propose CLIC, an unsupervised framework based on Contrastive Learning for learning Image Complexity representations. CLIC learns complexity-aware features from unlabeled data, thereby eliminating the need for costly labeling. Specifically, we design a novel positive and negative sample selection strategy to enhance the discrimination of complexity features. Additionally, we introduce a complexity-aware loss function guided by image priors to further constrain the learning process. Extensive experiments validate the effectiveness of CLIC in capturing image complexity. When fine-tuned with a small number of labeled samples from IC9600, CLIC achieves performance competitive with supervised methods. Moreover, applying CLIC to downstream tasks consistently improves performance. Notably, both the pretraining and application processes of CLIC are free from subjective bias.
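
The abstract does not detail CLIC's positive/negative sample selection or its prior-guided loss. As a rough, hypothetical sketch of the generic contrastive backbone such a framework builds on (a SimCLR-style NT-Xent/InfoNCE objective; the function name, temperature value, and tensor shapes below are illustrative assumptions, not CLIC's actual implementation):

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    # NT-Xent (InfoNCE) loss over paired embeddings of two views of the
    # same N images. This is the generic contrastive objective only;
    # per the abstract, CLIC adds its own sample-selection strategy and
    # a prior-guided complexity-aware term on top of a base like this.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2N, D)
    sim = z @ z.t() / temperature                # pairwise cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    # The positive for row i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for an encoder's output:
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)
loss = info_nce_loss(z1, z2)

In CLIC, per the abstract, the pairs would instead come from its complexity-aware positive/negative selection strategy, with the learning process further constrained by the image-prior-guided loss.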

View on arXiv
@article{liu2025_2411.12792,
  title={CLIC: Contrastive Learning Framework for Unsupervised Image Complexity Representation},
  author={Shipeng Liu and Liang Zhao and Dengfeng Chen},
  journal={arXiv preprint arXiv:2411.12792},
  year={2025}
}