High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise

28 October 2023
Aleksandar Armacki
Pranay Sharma
Gauri Joshi
Dragana Bajović
D. Jakovetić
S. Kar
arXiv: 2310.18784 [PDF, HTML]
Abstract

We study high-probability convergence guarantees of learning on streaming data in the presence of heavy-tailed noise. In the proposed scenario, the model is updated in an online fashion, as new information is observed, without storing any additional data. To combat the heavy-tailed noise, we consider a general framework of nonlinear stochastic gradient descent (SGD), providing several strong results. First, for non-convex costs and component-wise nonlinearities, we establish a convergence rate arbitrarily close to $\mathcal{O}\left(t^{-\frac{1}{4}}\right)$, whose exponent is independent of noise and problem parameters. Second, for strongly convex costs and component-wise nonlinearities, we establish a rate arbitrarily close to $\mathcal{O}\left(t^{-\frac{1}{2}}\right)$ for the weighted average of iterates, with exponent again independent of noise and problem parameters. Finally, for strongly convex costs and a broader class of nonlinearities, we establish convergence of the last iterate, with a rate $\mathcal{O}\left(t^{-\zeta}\right)$, where $\zeta \in (0,1)$ depends on problem parameters, noise and nonlinearity. As we show analytically and numerically, $\zeta$ can be used to inform the preferred choice of nonlinearity for given problem settings. Compared to state-of-the-art works, which only consider clipping, require bounded noise moments of order $\eta \in (1,2]$, and establish convergence rates whose exponents go to zero as $\eta \rightarrow 1$, we provide high-probability guarantees for a much broader class of nonlinearities and noise with symmetric density, with convergence rates whose exponents are bounded away from zero, even when the noise has only a finite first moment. Moreover, in the case of strongly convex functions, we demonstrate analytically and numerically that clipping is not always the optimal nonlinearity, further underlining the value of our general framework.
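Below is a minimal NumPy sketch of the online nonlinear SGD update, in which a component-wise nonlinearity (clipping or sign here) is applied to each fresh stochastic gradient before the step is taken. It is not the authors' implementation: the quadratic cost, the 1/(t+1) step-size schedule, and the Student-t noise model are assumptions chosen only to illustrate the heavy-tailed streaming setting.

import numpy as np

# Minimal sketch of nonlinear SGD on streaming data (not the authors' code).
# The quadratic cost, the 1/(t+1) step size, and the Student-t gradient noise
# are illustrative assumptions; clipping and sign are two component-wise nonlinearities.

def clip_componentwise(g, tau=1.0):
    # Component-wise clipping: each coordinate is truncated to [-tau, tau].
    return np.clip(g, -tau, tau)

def sign_componentwise(g):
    # Component-wise sign nonlinearity.
    return np.sign(g)

def nonlinear_sgd(x0, grad_oracle, nonlinearity, n_steps=50_000, a=1.0):
    # Online update x_{t+1} = x_t - alpha_t * Psi(g_t) with alpha_t = a / (t + 1);
    # each noisy gradient g_t is used once and discarded (streaming setting).
    x = np.asarray(x0, dtype=float).copy()
    for t in range(n_steps):
        g = grad_oracle(x)
        x = x - (a / (t + 1)) * nonlinearity(g)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_star = np.array([1.0, -2.0, 3.0])

    def grad_oracle(x):
        # Gradient of the strongly convex cost 0.5 * ||x - x_star||^2, corrupted by
        # heavy-tailed Student-t noise (1.5 degrees of freedom: finite mean, infinite variance).
        return (x - x_star) + rng.standard_t(df=1.5, size=x.shape)

    x_hat = nonlinear_sgd(np.zeros(3), grad_oracle, clip_componentwise)
    print("estimate:", x_hat, "error:", np.linalg.norm(x_hat - x_star))

Swapping clip_componentwise for sign_componentwise (or another bounded component-wise map) illustrates how the framework accommodates nonlinearities beyond clipping.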
