Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises: High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation

22 March 2023
Zijian Liu
Zhengyuan Zhou
arXiv:2303.12277
Abstract

Recently, several studies have considered stochastic optimization in a heavy-tailed noise regime: the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment (say, bounded by $\sigma^{p}$ for some $\sigma \geq 0$) with $p \in (1,2]$, which not only generalizes the traditional finite-variance assumption ($p=2$) but has also been observed empirically in several different tasks. Under this challenging assumption, much progress has been made for both convex and nonconvex problems, yet most of it concerns smooth objectives; the problem remains far less explored and understood when the functions are nonsmooth. This paper aims to fill this crucial gap by providing a comprehensive analysis of stochastic nonsmooth convex optimization with heavy-tailed noise. We revisit a simple clipping-based algorithm, which so far has only been proved to converge in expectation, and only under an additional strong convexity assumption. Under appropriate parameter choices, for both convex and strongly convex functions, we not only establish the first high-probability rates but also give refined in-expectation bounds compared with existing works. Remarkably, all of our results are optimal (or nearly optimal, up to logarithmic factors) with respect to the time horizon $T$, even when $T$ is unknown in advance. Additionally, we show how to make the algorithm parameter-free with respect to $\sigma$; in other words, the algorithm can still guarantee convergence without any prior knowledge of $\sigma$. Furthermore, a convergence rate that adapts to the initial distance is provided when $\sigma$ is assumed to be known.
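The paper's exact algorithm and parameter schedules are not reproduced here; the minimal Python sketch below only illustrates the generic idea of clipped stochastic subgradient descent under heavy-tailed gradient noise. The toy objective, the oracle noisy_l1_subgrad, the constant step size eta, and the clipping threshold tau are hypothetical placeholders, not the choices analyzed in the paper.

```python
import numpy as np

def clipped_sgd(subgrad_oracle, x0, T, eta=0.05, tau=5.0, seed=0):
    """Illustrative sketch of clipped stochastic subgradient descent.

    subgrad_oracle(x, rng) returns a (possibly heavy-tailed) stochastic
    subgradient at x. The constant step size `eta` and clipping
    threshold `tau` are placeholders, not the paper's schedules.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(T):
        g = subgrad_oracle(x, rng)
        norm = np.linalg.norm(g)
        # Clip the stochastic subgradient to norm at most tau, limiting
        # the influence of heavy-tailed noise on any single step.
        if norm > tau:
            g = g * (tau / norm)
        x = x - eta * g
        iterates.append(x.copy())
    # Return the average iterate, a standard output for convex objectives.
    return np.mean(iterates, axis=0)

# Example usage on a toy nonsmooth convex objective f(x) = ||x||_1 with
# additive heavy-tailed (Student-t, df=1.5) noise on the subgradient.
def noisy_l1_subgrad(x, rng):
    return np.sign(x) + rng.standard_t(df=1.5, size=x.shape)

x_hat = clipped_sgd(noisy_l1_subgrad, x0=np.ones(5), T=2000)
```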
