ResearchTrend.AI

arXiv:2202.03535
Noise Regularizes Over-parameterized Rank One Matrix Recovery, Provably

7 February 2022
Tianyi Liu
Yan Li
Enlu Zhou
Tuo Zhao
Abstract

We investigate the role of noise in optimization algorithms for learning over-parameterized models. Specifically, we consider the recovery of a rank one matrix $Y^*\in R^{d\times d}$ from a noisy observation $Y$ using an over-parameterized model. We parameterize the rank one matrix $Y^*$ by $XX^\top$, where $X\in R^{d\times d}$. We then show that, under mild conditions, the estimator obtained by the randomly perturbed gradient descent algorithm with the square loss function attains a mean square error of $O(\sigma^2/d)$, where $\sigma^2$ is the variance of the observational noise. In contrast, the estimator obtained by gradient descent without random perturbation only attains a mean square error of $O(\sigma^2)$. Our result partially justifies the implicit regularization effect of noise when learning over-parameterized models, and provides new understanding of training over-parameterized neural networks.
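The setup described in the abstract can be sketched numerically: fit an over-parameterized factor $X\in R^{d\times d}$ to a noisy rank-one observation $Y = Y^* + \text{noise}$ by gradient descent on the square loss, perturbing the iterate with random noise before each gradient evaluation. This is a minimal illustration, not the paper's exact algorithm; the step size, perturbation scale, iteration count, and loss normalization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50        # ambient dimension
sigma = 0.1   # observational noise level (illustrative)

# Ground-truth rank-one matrix Y* = u u^T and its noisy observation Y
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
Y_star = np.outer(u, u)
Y = Y_star + sigma * rng.standard_normal((d, d))

def grad(X):
    # Gradient of the square loss f(X) = (1/4) ||X X^T - Y||_F^2,
    # symmetrized since the noisy Y need not be symmetric.
    R = X @ X.T - Y
    return 0.5 * (R + R.T) @ X

def perturbed_gd(steps=2000, eta=0.05, noise_scale=0.01):
    # Over-parameterized full d x d factor, small random initialization.
    X = 0.01 * rng.standard_normal((d, d))
    for _ in range(steps):
        # Randomly perturb the iterate before taking the gradient step.
        X_pert = X + noise_scale * rng.standard_normal((d, d))
        X = X - eta * grad(X_pert)
    return X

X_hat = perturbed_gd()
# Per-entry mean square error of the recovered matrix X X^T against Y*.
mse = np.linalg.norm(X_hat @ X_hat.T - Y_star, "fro") ** 2 / d ** 2
print(f"per-entry MSE: {mse:.2e}")
```

To reproduce the paper's comparison, one would run the same loop with `noise_scale=0` (plain gradient descent) and compare the resulting errors; the claimed gap is a factor of $d$ in the mean square error.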
