
Taming the Randomness: Towards Label-Preserving Cropping in Contrastive Learning

28 April 2025
Mohamed Hassan
Mohammad Wasil
Sebastian Houben
Abstract

Contrastive learning (CL) approaches have gained great recognition as a very successful subset of self-supervised learning (SSL) methods. SSL enables learning from unlabeled data, a crucial step in the advancement of deep learning, particularly in computer vision (CV), given the plethora of unlabeled image data. CL works by comparing different random augmentations (e.g., different crops) of the same image, thus achieving self-labeling. However, random augmentation, and random cropping in particular, can produce an image that is semantically very distant from the original, leading to false labels that undermine the efficacy of the method. In this research, two novel parameterized cropping methods are introduced that make self-labeling more robust and consequently improve downstream performance. Compared with non-parameterized random cropping, these methods improve classification accuracy on the downstream task of CIFAR-10 classification by between 2.7% and 12.4%, depending on the crop size.
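The abstract leaves the two parameterized cropping methods unspecified. As a rough illustration of the underlying idea, the Python sketch below constrains SimCLR-style random cropping so that the two views of an image retain a minimum overlap, one plausible way to reduce semantically distant (falsely labeled) pairs. The function name, the IoU threshold, and the rejection-sampling loop are assumptions for illustration only; they do not reproduce the paper's methods.

import torchvision.transforms.functional as TF
from torchvision import transforms
from PIL import Image


def constrained_pair_crop(img: Image.Image, out_size: int = 32,
                          scale=(0.4, 1.0), min_iou: float = 0.25,
                          max_tries: int = 20):
    """Sample two random crop boxes that overlap by at least `min_iou`
    (IoU measured in the original image), then resize both to
    `out_size`. The threshold and rejection loop are illustrative
    assumptions, not the paper's parameterized cropping methods."""

    def sample_box():
        # Static torchvision helper; returns (top, left, height, width).
        return transforms.RandomResizedCrop.get_params(
            img, scale=list(scale), ratio=[3 / 4, 4 / 3])

    def iou(a, b):
        ay, ax, ah, aw = a
        by, bx, bh, bw = b
        inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        inter = inter_h * inter_w
        union = ah * aw + bh * bw - inter
        return inter / union if union > 0 else 0.0

    box_a = sample_box()
    box_b = sample_box()
    for _ in range(max_tries):
        if iou(box_a, box_b) >= min_iou:
            break  # The two views share enough content; accept.
        box_b = sample_box()  # Otherwise resample the second view.

    view_a = TF.resized_crop(img, *box_a, [out_size, out_size])
    view_b = TF.resized_crop(img, *box_b, [out_size, out_size])
    return view_a, view_b

In a contrastive pipeline, each returned view would still pass through the remaining augmentations (e.g., color jitter and flips) before the contrastive loss is computed.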

@article{hassan2025_2504.19824,
  title={Taming the Randomness: Towards Label-Preserving Cropping in Contrastive Learning},
  author={Mohamed Hassan and Mohammad Wasil and Sebastian Houben},
  journal={arXiv preprint arXiv:2504.19824},
  year={2025}
}