Preconditioned Gradient Descent for Overparameterized Nonconvex Burer--Monteiro Factorization with Global Optimality Certification

7 June 2022
Gavin Zhang, Salar Fattahi, Richard Y. Zhang
Abstract

We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^T)$ over an $n\times r$ factor matrix $X$, where $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices. While only a second-order stationary point $X$ can be provably found in reasonable time, if $X$ is additionally rank deficient, then its rank deficiency certifies it as being globally optimal. This way of certifying global optimality necessarily requires the search rank $r$ of the current iterate $X$ to be overparameterized with respect to the rank $r^{\star}$ of the global minimizer $X^{\star}$. Unfortunately, overparameterization significantly slows down the convergence of gradient descent, from a linear rate with $r=r^{\star}$ to a sublinear rate when $r>r^{\star}$, even when $\phi$ is strongly convex. In this paper, we propose an inexpensive preconditioner that restores the convergence rate of gradient descent back to linear in the overparameterized case, while also making it agnostic to possible ill-conditioning in the global minimizer $X^{\star}$.
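The sketch below illustrates the general idea of such a right-preconditioned update on a Burer--Monteiro problem. It is not the authors' code: it assumes a concrete strongly convex cost $\phi(M)=\tfrac{1}{2}\|M-M^{\star}\|_F^2$ (matrix denoising), a preconditioner of the form $(X^T X+\theta I)^{-1}$, and a damping rule $\theta=\sqrt{\phi(XX^T)}$ that shrinks to zero near the optimum; the paper's exact damping choice and step size may differ.

```python
# Minimal sketch of preconditioned gradient descent for min_X phi(X X^T),
# with an overparameterized search rank r > r_star. Assumptions (not from
# the paper's text): phi(M) = 0.5 * ||M - M_star||_F^2 and theta = sqrt(phi).
import numpy as np

rng = np.random.default_rng(0)
n, r_star, r = 50, 2, 5                      # search rank r overparameterizes r_star

X_star = rng.standard_normal((n, r_star))    # ground-truth factor
M_star = X_star @ X_star.T                   # rank-r_star PSD target

def phi(M):
    # Smooth, strongly convex cost over n x n matrices.
    return 0.5 * np.linalg.norm(M - M_star, "fro") ** 2

def grad_f(X):
    # Gradient of f(X) = phi(X X^T): here 2 * (X X^T - M_star) @ X.
    return 2.0 * (X @ X.T - M_star) @ X

X = 0.1 * rng.standard_normal((n, r))        # small random initialization
eta = 0.1                                    # assumed constant step size
for k in range(500):
    G = grad_f(X)
    theta = np.sqrt(phi(X @ X.T))            # assumed damping; vanishes at the optimum
    P = X.T @ X + theta * np.eye(r)          # small r x r preconditioner, cheap to invert
    X = X - eta * np.linalg.solve(P, G.T).T  # right-preconditioned step: X - eta * G P^{-1}

print("final cost:", phi(X @ X.T))
```

Because the preconditioner is an $r\times r$ matrix, each step adds only $O(nr^2 + r^3)$ work on top of the gradient computation, which is what makes it inexpensive relative to plain gradient descent.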

@article{zhang2022_2206.03345,
  title={Preconditioned Gradient Descent for Overparameterized Nonconvex Burer--Monteiro Factorization with Global Optimality Certification},
  author={Gavin Zhang and Salar Fattahi and Richard Y. Zhang},
  journal={arXiv preprint arXiv:2206.03345},
  year={2022}
}