Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance (arXiv:2102.10346)

20 February 2021
Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Şimşekli, Murat A. Erdogdu

Papers citing "Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance"

12 of 12 citing papers shown

From Gradient Clipping to Normalization for Heavy Tailed SGD
Florian Hübler, Ilyas Fatkhullin, Niao He
17 Oct 2024

Dynamic Decoupling of Placid Terminal Attractor-based Gradient Descent Algorithm
Jinwei Zhao, Marco Gori, Alessandro Betti, S. Melacci, Hongtao Zhang, Jiedong Liu, Xinhong Hei
10 Sep 2024

Almost sure convergence rates of stochastic gradient methods under gradient domination
Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring
22 May 2024

Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping
Umut Simsekli, Mert Gurbuzbalaban, S. Yıldırım, Lingjiong Zhu
04 Mar 2024

From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed SGD
Benjamin Dupuis, Paul Viallard
01 Dec 2023

Provably Robust Temporal Difference Learning for Heavy-Tailed Rewards
Semih Cayci, A. Eryilmaz
20 Jun 2023

Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises: High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation
Zijian Liu, Zhengyuan Zhou
22 Mar 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu
10 Feb 2023

Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
27 Jan 2023

Huber-Robust Confidence Sequences
Hongjian Wang, Aaditya Ramdas
23 Jan 2023

Taming Fat-Tailed ("Heavier-Tailed" with Potentially Infinite Variance) Noise in Federated Learning
Haibo Yang, Pei-Yuan Qiu, Jia Liu
03 Oct 2022

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu
14 Jun 2020