

Stochastic Bias-Reduced Gradient Methods

17 June 2021
Hilal Asi
Yair Carmon
Arun Jambulapati
Yujia Jin
Aaron Sidford
arXiv: 2106.09481
Abstract

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function. In particular, we use a multilevel Monte-Carlo approach due to Blanchet and Glynn to turn any optimal stochastic gradient method into an estimator of $x_\star$ with bias $\delta$, variance $O(\log(1/\delta))$, and an expected sampling cost of $O(\log(1/\delta))$ stochastic gradient evaluations. As an immediate consequence, we obtain cheap and nearly unbiased gradient estimators for the Moreau-Yoshida envelope of any Lipschitz convex function, allowing us to perform dimension-free randomized smoothing. We demonstrate the potential of our estimator through four applications. First, we develop a method for minimizing the maximum of $N$ functions, improving on recent results and matching a lower bound up to logarithmic factors. Second and third, we recover state-of-the-art rates for projection-efficient and gradient-efficient optimization using simple algorithms with a transparent analysis. Finally, we show that an improved version of our estimator would yield a nearly linear-time, optimal-utility, differentially-private non-smooth stochastic optimization method.
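
To make the multilevel Monte-Carlo idea concrete, below is a minimal Python sketch of a Blanchet-Glynn-style bias-reduced estimator wrapped around plain averaged SGD on a toy quadratic. The inner solver, the objective, the step sizes, and the truncation level `j_max` are illustrative assumptions, not the paper's construction, which instead wraps an optimal stochastic gradient method to obtain bias $\delta$ at an expected cost of $O(\log(1/\delta))$ gradient evaluations.

```python
# Illustrative sketch (not the paper's exact method): a Blanchet-Glynn-style
# multilevel Monte Carlo (MLMC) wrapper that turns a stochastic gradient solver
# into a low-bias estimator of the minimizer x_star. The toy quadratic objective,
# inner SGD solver, and constants below are assumptions for demonstration only.

import numpy as np

rng = np.random.default_rng(0)


def stochastic_grad(x, x_star, noise_std=1.0):
    """Noisy gradient of the toy objective f(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + noise_std * rng.standard_normal(x.shape)


def sgd(x0, x_star, num_steps, step_scale=1.0):
    """Averaged SGD with O(1/t) step sizes; returns an approximate minimizer."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for t in range(1, num_steps + 1):
        x -= (step_scale / t) * stochastic_grad(x, x_star)
        avg += (x - avg) / t  # running average of iterates
    return avg


def mlmc_minimizer_estimate(x0, x_star, j_max):
    """Bias-reduced estimate of x_star.

    Draw a level J with P(J = j) = 2^{-j}, run the inner solver with budgets
    2^J and 2^{J-1}, and reweight their difference by 2^J. In expectation the
    levels telescope to the output of a run with 2^{j_max} gradients, so the
    bias decays at the inner solver's convergence rate, while the expected
    number of gradient evaluations stays O(j_max).
    """
    base = sgd(x0, x_star, num_steps=1)
    j = int(rng.geometric(p=0.5))  # P(J = j) = 2^{-j} for j = 1, 2, ...
    if j > j_max:
        return base  # truncation keeps the expected cost finite
    fine = sgd(x0, x_star, num_steps=2 ** j)
    coarse = sgd(x0, x_star, num_steps=2 ** (j - 1))
    return base + (2 ** j) * (fine - coarse)


if __name__ == "__main__":
    d = 10
    x_star = rng.standard_normal(d)
    x0 = np.zeros(d)
    estimates = np.stack(
        [mlmc_minimizer_estimate(x0, x_star, j_max=10) for _ in range(2000)]
    )
    print("bias norm:", np.linalg.norm(estimates.mean(axis=0) - x_star))
```

The reweighted level difference is the key point of the sketch: sampling the level $J$ with probability $2^{-J}$ makes the estimate unbiased for the output of a run with $2^{j_{\max}}$ gradients, while the expected number of gradient evaluations remains linear in $j_{\max}$.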
