ResearchTrend.AI
arXiv:2408.09929
Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise

19 August 2024
Hongyuan Zhang
Yanchen Xu
Sida Huang
Xuelong Li
Abstract

Inspired by the idea of Positive-incentive Noise (Pi-Noise or π-Noise), which aims to learn reliable noise that benefits a task, we investigate the connection between contrastive learning and π-noise in this paper. By converting the contrastive loss into an auxiliary Gaussian distribution that quantitatively measures the difficulty of a specific contrastive model within an information-theoretic framework, we properly define the task entropy, the core concept of π-noise, for contrastive learning. We further prove that the predefined data augmentations in the standard contrastive learning paradigm can be regarded as a form of point estimation of π-noise. Guided by this theoretical study, we propose a framework that trains a π-noise generator to learn beneficial noise (rather than merely estimating it) as data augmentations for contrast. The framework applies to diverse types of data and is fully compatible with existing contrastive models. From the visualizations, we find that the proposed method successfully learns effective augmentations.
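The abstract's core idea, replacing a fixed, hand-designed augmentation with noise sampled from a learnable distribution whose two perturbed views feed a standard contrastive loss, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`pi_noise_views`, `info_nce`) and the per-feature Gaussian parameters `mu` and `log_sigma` are hypothetical stand-ins for what would be a trained generator network in the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE loss between two batches of paired views."""
    # Normalize so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                         # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # Positives sit on the diagonal: view i of sample i.
    return -np.mean(np.log(p[np.diag_indices(len(z1))]))

def pi_noise_views(x, mu, log_sigma):
    """Hypothetical pi-noise augmentation: instead of a predefined
    transform, sample additive noise from a learnable Gaussian
    N(mu, sigma^2) to produce two views of each sample."""
    sigma = np.exp(log_sigma)
    eps1 = rng.standard_normal(x.shape)
    eps2 = rng.standard_normal(x.shape)
    return x + mu + sigma * eps1, x + mu + sigma * eps2

# Toy data: 8 samples, 16-dim features. In the paper's framework the
# noise parameters would be predicted by a generator network and
# optimized jointly with the contrastive model.
x = rng.standard_normal((8, 16))
mu, log_sigma = np.zeros(16), np.full(16, -1.0)
v1, v2 = pi_noise_views(x, mu, log_sigma)
loss = info_nce(v1, v2)
print(f"InfoNCE loss on pi-noise views: {loss:.4f}")
```

In a full training loop, the gradient of this contrastive loss would flow back (via the reparameterization `x + mu + sigma * eps`) into the noise generator's parameters, which is what distinguishes learned π-noise from the fixed augmentations the paper characterizes as point estimates.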
