ResearchTrend.AI
arXiv:1509.03917
Dropping Convexity for Faster Semi-definite Optimization

14 September 2015
Srinadh Bhojanapalli
Anastasios Kyrillidis
Sujay Sanghavi
Abstract

We study the minimization of a convex function f(X) over the set of n×n positive semi-definite matrices, when the problem is recast as min_U g(U) := f(UU^⊤), with U ∈ ℝ^{n×r} and r ≤ n. We study the performance of gradient descent on g, which we refer to as Factored Gradient Descent (FGD), under standard assumptions on the original function f. We provide a rule for selecting the step size and, with this choice, show that the local convergence rate of FGD mirrors that of standard gradient descent on the original f: i.e., after k steps, the error is O(1/k) for smooth f, and exponentially small in k when f is (restricted) strongly convex. In addition, we provide a procedure to initialize FGD for (restricted) strongly convex objectives when one only has access to f via a first-order oracle; for several problem instances, such proper initialization leads to global convergence guarantees. FGD and similar procedures are widely used in practice for problems that can be posed as matrix factorization. To the best of our knowledge, this is the first paper to provide precise convergence rate guarantees for general convex functions under standard convex assumptions.
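The core idea in the abstract — running gradient descent on g(U) = f(UU^⊤) instead of on f over the PSD cone — can be sketched in a few lines. The sketch below is illustrative only: the objective f(X) = ½‖X − M‖_F², the target M, the rank r, the constant step size, and the small random initialization are all assumptions for the demo, not the paper's exact setup or its step-size rule. The chain-rule gradient of g is (∇f(X) + ∇f(X)^⊤)U with X = UU^⊤.

```python
import numpy as np

def fgd(grad_f, U0, step, iters=2000):
    """Illustrative Factored Gradient Descent: gradient descent on
    g(U) = f(U U^T), using grad g(U) = (grad_f(X) + grad_f(X)^T) @ U
    with X = U U^T (chain rule)."""
    U = U0.copy()
    for _ in range(iters):
        X = U @ U.T
        G = grad_f(X)
        U = U - step * (G + G.T) @ U
    return U

# Toy instance (an assumption, not the paper's): f(X) = 0.5 * ||X - M||_F^2
# with a rank-r PSD target M, so grad_f(X) = X - M.
rng = np.random.default_rng(0)
n, r = 20, 3
A = rng.standard_normal((n, r))
M = A @ A.T                              # PSD ground truth of rank r
U0 = 0.1 * rng.standard_normal((n, r))   # small random initialization
U = fgd(lambda X: X - M, U0, step=1e-3)

# Relative error of the factored solution; small after convergence.
print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```

Note that g is non-convex even when f is convex, which is why the paper's step-size rule and initialization procedure matter; the constant step size used here merely needs to be small relative to the local smoothness of g.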
