The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing

2 February 2023
Xingyu Xu
Yandi Shen
Yuejie Chi
Cong Ma
arXiv:2302.01186 · PDF · HTML
Abstract

We propose \textsf{ScaledGD($\lambda$)}, a preconditioned gradient descent method for the low-rank matrix sensing problem when the true rank is unknown and the matrix is possibly ill-conditioned. Using an overparameterized factor representation, \textsf{ScaledGD($\lambda$)} starts from a small random initialization and proceeds by gradient descent with a specific form of damped preconditioning to combat the bad curvature induced by overparameterization and ill-conditioning. At the expense of the light computational overhead incurred by the preconditioners, \textsf{ScaledGD($\lambda$)} is remarkably robust to ill-conditioning compared to vanilla gradient descent (\textsf{GD}), even with overparameterization. Specifically, we show that, under the Gaussian design, \textsf{ScaledGD($\lambda$)} converges to the true low-rank matrix at a constant linear rate after a small number of iterations that scales only logarithmically with the condition number and the problem dimension. This significantly improves over the convergence rate of vanilla \textsf{GD}, which suffers from a polynomial dependency on the condition number. Our work provides evidence of the power of preconditioning in accelerating convergence without hurting generalization in overparameterized learning.
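
To make the abstract's description concrete, below is a minimal NumPy sketch of a ScaledGD($\lambda$)-style iteration, not the authors' implementation. It assumes a damped-preconditioner update of the form L ← L − η ∇_L f · (RᵀR + λI)⁻¹ and R ← R − η ∇_R f · (LᵀL + λI)⁻¹ on an overparameterized factorization L Rᵀ, starting from a small random initialization; all numerical constants (step size, damping, initialization scale, overparameterized rank, measurement count) are illustrative choices rather than the paper's prescribed values.

```python
# Minimal sketch of a ScaledGD(lambda)-style iteration for overparameterized
# low-rank matrix sensing under a Gaussian design. Hyperparameters below are
# illustrative, not the values prescribed by the paper's theory.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: an n x n matrix of rank r_true, normalized to unit spectral norm.
n, r_true, k, m = 40, 3, 8, 6000            # dims, true rank, overparameterized rank, #measurements
M_star = rng.standard_normal((n, r_true)) @ rng.standard_normal((r_true, n))
M_star /= np.linalg.norm(M_star, 2)

# Gaussian sensing: y_i = <A_i, M_star>, with each A_i stored as a row of A.
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ M_star.ravel()

# Small random initialization of the overparameterized factors L R^T (k > r_true).
alpha, eta, lam = 1e-3, 0.25, 0.05
L = alpha * rng.standard_normal((n, k))
R = alpha * rng.standard_normal((n, k))

for _ in range(400):
    residual = A @ (L @ R.T).ravel() - y     # <A_i, L R^T> - y_i
    G = (A.T @ residual).reshape(n, n)       # gradient of the least-squares loss w.r.t. L R^T
    grad_L, grad_R = G @ R, G.T @ L
    # Damped preconditioners: inverses of the Gram matrices shifted by lam * I.
    P_L = np.linalg.inv(R.T @ R + lam * np.eye(k))
    P_R = np.linalg.inv(L.T @ L + lam * np.eye(k))
    L, R = L - eta * grad_L @ P_L, R - eta * grad_R @ P_R

print("relative error:", np.linalg.norm(L @ R.T - M_star) / np.linalg.norm(M_star))
```

The role of the damping term in this sketch mirrors the abstract's claims: near the small random initialization the Gram matrices RᵀR and LᵀL are nearly singular, and the λI shift keeps the preconditioners well defined, while once the factors grow the preconditioning approximately equalizes curvature across singular directions, which is what removes the polynomial dependence on the condition number that vanilla GD suffers from.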
