ResearchTrend.AI

Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis

24 March 2024
K. Khamaru
arXiv:2404.00042
Abstract

We consider the problem of stochastic convex optimization under convex constraints. We analyze the behavior of a natural variance-reduced proximal gradient (VRPG) algorithm for this problem. Our main result is a non-asymptotic guarantee for the VRPG algorithm. In contrast to minimax worst-case guarantees, our result is instance-dependent: it captures the complexity of the loss function, the variability of the noise, and the geometry of the constraint set. We show that the non-asymptotic performance of the VRPG algorithm is governed by the distance, scaled by $\sqrt{N}$, between the solution of the given problem and that of a certain small perturbation of it, both solved under the given convex constraints; here, $N$ denotes the number of samples. Leveraging a well-established connection between local minimax lower bounds and solutions to perturbed problems, we show that as $N \rightarrow \infty$, the VRPG algorithm achieves the renowned local minimax lower bound of Hájek and Le Cam up to universal constants and a logarithmic factor of the sample size.
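The abstract describes the algorithm rather than specifying it. As a concrete point of reference, here is a minimal sketch of a variance-reduced proximal gradient loop in the SVRG style, which is one standard instantiation of the idea: the function names vrpg and project_ball, the ball constraint, the toy least-squares instance, and the step size are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto {x : ||x||_2 <= radius}; this is the
    # proximal operator of the indicator of the constraint set.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def vrpg(grad_i, n_samples, x0, prox, step, n_epochs=20, rng=None):
    # SVRG-style variance-reduced proximal gradient loop (a sketch,
    # not the paper's exact algorithm).
    #   grad_i(i, x): gradient of the i-th sample loss at x
    #   prox(x):      projection/proximal step enforcing the constraint
    rng = rng or np.random.default_rng(0)
    x_ref = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        # Full gradient at the anchor point, recomputed once per epoch.
        full_grad = np.mean([grad_i(i, x_ref) for i in range(n_samples)], axis=0)
        x = x_ref.copy()
        for _ in range(n_samples):
            i = rng.integers(n_samples)
            # Unbiased variance-reduced estimate: its variance vanishes
            # as x approaches x_ref, which drives the fast rates.
            g = grad_i(i, x) - grad_i(i, x_ref) + full_grad
            x = prox(x - step * g)
        x_ref = x
    return x_ref

# Toy instance: least squares over the unit Euclidean ball.
rng = np.random.default_rng(1)
N, d = 200, 5
A, b = rng.normal(size=(N, d)), rng.normal(size=N)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = vrpg(grad_i, N, np.zeros(d), project_ball, step=0.01, rng=rng)
print(np.linalg.norm(x_hat))  # iterate stays feasible: norm <= 1
```

The projection step is where the geometry of the constraint set enters: every iterate stays feasible, so the error is measured against the constrained optimum rather than the unconstrained one.

For reference, the local minimax lower bound of Hájek and Le Cam cited above takes, in its classical parametric form, roughly the following shape (recalled from the classical literature; the paper works with the analogue for constrained stochastic optimization):

```latex
\liminf_{c \to \infty} \, \liminf_{N \to \infty} \,
\sup_{\|h\| \le c} \,
\mathbb{E}_{\theta + h/\sqrt{N}}
\Bigl[ \ell\bigl(\sqrt{N}\,(\hat{\theta}_N - \theta - h/\sqrt{N})\bigr) \Bigr]
\;\ge\; \mathbb{E}\bigl[\ell(Z)\bigr],
\qquad Z \sim \mathcal{N}\bigl(0,\, I(\theta)^{-1}\bigr)
```

Here $\hat{\theta}_N$ is any estimator sequence, $\ell$ is a bowl-shaped loss, and $I(\theta)$ is the Fisher information: no estimator can asymptotically beat the Gaussian limit uniformly over shrinking neighborhoods of the instance, which is the benchmark the VRPG guarantee is shown to match up to constants and a logarithmic factor.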
