An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems

30 June 2023
Clément Lezane
Cristóbal Guzmán
Alexandre d’Aspremont
Abstract

In this work, we revisit the problem of solving large-scale semidefinite programs using randomized first-order methods and stochastic smoothing. We introduce two oblivious stochastic mirror descent algorithms based on a complementary composite setting. One algorithm is designed for non-smooth objectives, while an accelerated version is tailored for smooth objectives. Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function. For the non-smooth case with $\mathcal{M}$-bounded oracles, we prove a convergence rate of $O(\mathcal{M}/\sqrt{T})$. For the $L$-smooth case with a feasible set bounded by $D$, we derive a convergence rate of $O\big(L^2 D^2/(T^2\sqrt{T}) + (D_0^2 + \sigma^2)/\sqrt{T}\big)$, where $D_0$ is the starting distance to an optimal solution and $\sigma^2$ is the stochastic oracle variance. These rates had previously been obtained only by assuming prior knowledge of either the Lipschitz constant or the starting distance to an optimal solution. We further show how to extend our framework to relative scale, and we demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
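To make the setting concrete, below is a minimal numerical sketch of (non-accelerated) stochastic mirror descent on the spectraplex with the von Neumann entropy mirror map, using dual averaging with step weights $1/\sqrt{t}$ that require no problem constants, in the spirit of the "oblivious" step sizes described above. The toy objective $\langle C, X\rangle$, the oracle noisy_grad, and all parameter choices are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def exp_projection(S):
    """Map a dual matrix S to X = exp(-S) / tr(exp(-S)) on the spectraplex
    {X : X >= 0, tr(X) = 1}, computed stably via an eigendecomposition."""
    w, V = np.linalg.eigh(S)
    e = np.exp(-(w - w.min()))          # shift eigenvalues to avoid overflow
    return (V * (e / e.sum())) @ V.T

def oblivious_smd(grad_oracle, n, T, rng):
    """Dual-averaging stochastic mirror descent with the entropy mirror map.
    The weights 1/sqrt(t) use no Lipschitz constant or distance estimate."""
    S = np.zeros((n, n))                # accumulated scaled gradients (dual variable)
    X = np.eye(n) / n                   # uniform start: center of the spectraplex
    X_avg = np.zeros((n, n))
    for t in range(1, T + 1):
        G = grad_oracle(X, rng)         # stochastic (sub)gradient at the current iterate
        S += G / np.sqrt(t)             # oblivious weighting, no problem constants
        X = exp_projection(S)           # matrix exponentiated-gradient step
        X_avg += X
    return X_avg / T                    # averaged iterate

# Toy eigenvalue problem: min_{X in spectraplex} <C, X>, whose optimal value
# is lambda_min(C); the hypothetical oracle returns C plus symmetric noise.
rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n))
C = (A + A.T) / 2

def noisy_grad(X, rng):
    N = rng.standard_normal((n, n))
    return C + 0.1 * (N + N.T) / 2

X_hat = oblivious_smd(noisy_grad, n, T=2000, rng=rng)
print(np.trace(C @ X_hat), np.linalg.eigvalsh(C).min())
```

The entropy mirror map is the natural geometry for the spectraplex; the paper's accelerated variant for smooth objectives and its complementary composite term are beyond this sketch.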
