Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks

11 April 2025
Chenyang Zhang
Peifeng Gao
Difan Zou
Yuan Cao
    OOD
    MLT
Abstract

Modern neural networks are usually highly over-parameterized. Behind the wide use of over-parameterized networks is the belief that, if the data are simple, the trained network will automatically be equivalent to a simple predictor. Following this intuition, many existing works have studied different notions of the "rank" of neural networks and its relation to the rank of the data. In this work, we study the rank of convolutional neural networks (CNNs) trained by gradient descent, with a specific focus on the robustness of this rank to image background noise. Specifically, we point out that, when background noise is added to images, the rank of the CNN trained with gradient descent is affected far less than the rank of the data. We support this claim with a theoretical case study, in which we consider a particular data model that characterizes low-rank clean images with added background noise. We prove that CNNs trained by gradient descent can learn the intrinsic dimension of the clean images despite the presence of relatively large background noise. We also conduct experiments on synthetic and real datasets to further validate our claim.
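To make the "rank of the data" side of this contrast concrete, here is a minimal sketch (not the paper's data model or experimental setup): clean images lying in a low-dimensional subspace give a data matrix of low numerical rank, while adding i.i.d. Gaussian background noise makes the data matrix effectively full rank. The dimensions, noise level, and singular-value threshold below are illustrative choices, and the sketch does not reproduce the paper's result about the trained CNN's rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, sigma = 500, 256, 5, 0.3   # samples, ambient dim, intrinsic dim, noise std (assumed values)

# Clean data: n points in an r-dimensional subspace of R^d, so the matrix has rank r.
basis = np.linalg.qr(rng.standard_normal((d, r)))[0]   # orthonormal basis, shape (d, r)
clean = rng.standard_normal((n, r)) @ basis.T          # shape (n, d)

# Noisy data: clean images plus Gaussian "background noise" in every coordinate.
noisy = clean + sigma * rng.standard_normal((n, d))

def numerical_rank(X, tol=1e-2):
    """Count singular values larger than tol times the largest singular value."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

print("numerical rank of clean data:", numerical_rank(clean))   # close to r
print("numerical rank of noisy data:", numerical_rank(noisy))   # close to min(n, d)
```

The point of the contrast in the paper is that, unlike the data matrix above, the rank of a CNN trained by gradient descent on such data remains close to the intrinsic dimension of the clean images.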

@article{zhang2025_2504.08628,
  title={Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks},
  author={Chenyang Zhang and Peifeng Gao and Difan Zou and Yuan Cao},
  journal={arXiv preprint arXiv:2504.08628},
  year={2025}
}