ResearchTrend.AI
arXiv:1907.02707 · Cited By
Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method

5 July 2019
A. Juditsky
A. Nazin
A. S. Nemirovsky
Alexandre B. Tsybakov
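As context for the citing papers below, which largely build on gradient clipping and mirror descent under heavy-tailed noise: a minimal sketch of stochastic mirror descent with clipped gradients on the probability simplex, assuming an entropic mirror map and a toy heavy-tailed oracle. This illustrates the general technique family, not the paper's exact algorithm; all function and parameter names here are hypothetical.

```python
import numpy as np

def clipped_mirror_descent(grad_oracle, dim, steps, step_size, clip_level, rng):
    # Entropic mirror descent on the probability simplex; each stochastic
    # gradient is truncated in sup-norm to blunt heavy-tailed noise.
    x = np.full(dim, 1.0 / dim)              # start at the simplex center
    avg = np.zeros(dim)
    for _ in range(steps):
        g = grad_oracle(x, rng)
        g_norm = np.linalg.norm(g, np.inf)
        if g_norm > clip_level:              # clip: rescale, keep direction
            g = g * (clip_level / g_norm)
        x = x * np.exp(-step_size * g)       # exponentiated-gradient update
        x /= x.sum()                         # re-normalize onto the simplex
        avg += x
    return avg / steps                       # averaged iterate

def grad_oracle(x, rng):
    # Toy objective f(x) = 0.5 * ||x - e_1||^2 with heavy-tailed noise
    # (Student-t, df=2.5, so the variance barely exists).
    target = np.zeros_like(x)
    target[0] = 1.0
    return (x - target) + 0.5 * rng.standard_t(df=2.5, size=x.size)

rng = np.random.default_rng(0)
x_hat = clipped_mirror_descent(grad_oracle, dim=5, steps=5000,
                               step_size=0.05, clip_level=1.0, rng=rng)
print(x_hat)  # most mass should concentrate on coordinate 0
```

Clipping rescales, rather than discards, outlying gradients, so the update direction is preserved while a single heavy-tailed draw can no longer dominate the trajectory.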

Papers citing "Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method"

34 papers shown
Convergence of Clipped-SGD for Convex $(L_0,L_1)$-Smooth Optimization with Heavy-Tailed Noise
S. Chezhegov, Aleksandr Beznosikov, Samuel Horváth, Eduard A. Gorbunov (27 May 2025)

From Gradient Clipping to Normalization for Heavy Tailed SGD
Florian Hübler, Ilyas Fatkhullin, Niao He (17 Oct 2024)

Making Robust Generalizers Less Rigid with Loss Concentration
Matthew J. Holland, Toma Hamada (07 Aug 2024)

Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence
Ilyas Fatkhullin, Niao He (27 Feb 2024)

The Price of Adaptivity in Stochastic Convex Optimization
Y. Carmon, Oliver Hinder (16 Feb 2024)

General Tail Bounds for Non-Smooth Stochastic Mirror Descent
Khaled Eldowa, Andrea Paudice (12 Dec 2023)

Smoothed Gradient Clipping and Error Feedback for Distributed Optimization under Heavy-Tailed Noise
Shuhua Yu, D. Jakovetić, S. Kar (25 Oct 2023)

Robust Stochastic Optimization via Gradient Quantile Clipping
Ibrahim Merad, Stéphane Gaïffas (29 Sep 2023)

Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik (30 May 2023)

Differentially Private Stochastic Convex Optimization in (Non)-Euclidean Space Revisited
Jinyan Su, Changhong Zhao, Di Wang (31 Mar 2023)

High Probability Convergence of Stochastic Gradient Methods
Zijian Liu, Ta Duy Nguyen, Thien Hai Nguyen, Alina Ene, Huy Le Nguyen (28 Feb 2023)

Large deviations rates for stochastic gradient descent with strongly convex functions
Dragana Bajović, D. Jakovetić, S. Kar (02 Nov 2022)

High Probability Bounds for Stochastic Subgradient Schemes with Heavy Tailed Noise
D. A. Parletta, Andrea Paudice, Massimiliano Pontil, Saverio Salzo (17 Aug 2022)

Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise
Eduard A. Gorbunov, Marina Danilova, David Dobre, Pavel Dvurechensky, Alexander Gasnikov, Gauthier Gidel (02 Jun 2022)

Nonlinear gradient mappings and stochastic optimization: A general framework with applications to heavy-tail noise
D. Jakovetić, Dragana Bajović, Anit Kumar Sahu, S. Kar, Nemanja Milošević, Dusan Stamenkovic (06 Apr 2022)

Flexible risk design using bi-directional dispersion
Matthew J. Holland (28 Mar 2022)

Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance
Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu (23 Feb 2022)

Stochastic linear optimization never overfits with quadratically-bounded losses on general data
Matus Telgarsky (14 Feb 2022)

Heavy-tailed Streaming Statistical Estimation
Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar (25 Aug 2021)

Robust Online Convex Optimization in the Presence of Outliers
T. Erven, Sarah Sachs, Wouter M. Koolen, W. Kotłowski (05 Jul 2021)

Robust learning with anytime-guaranteed feedback
Matthew J. Holland (24 May 2021)

Parameter-free Gradient Temporal Difference Learning
Andrew Jacobsen, Alan Chan (10 May 2021)

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Şimşekli, Murat A. Erdogdu (20 Feb 2021)

Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms
Jules Depersin, Guillaume Lecué (01 Feb 2021)

Better scalability under potentially heavy-tailed feedback
Matthew J. Holland (14 Dec 2020)

Nearly Optimal Robust Method for Convex Compositional Problems with Heavy-Tailed Noise
Yan Yan, Xin Man, Tianbao Yang (17 Jun 2020)

Sparse recovery by reduced variance stochastic approximation
A. Juditsky, A. Kulunchakov, Hlib Tsyntseus (11 Jun 2020)

Improved scalability under heavy tails, without strong convexity
Matthew J. Holland (02 Jun 2020)

Better scalability under potentially heavy-tailed gradients
Matthew J. Holland (01 Jun 2020)

Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
Eduard A. Gorbunov, Marina Danilova, Alexander Gasnikov (21 May 2020)

A termination criterion for stochastic gradient descent for binary classification
Sina Baghal, Courtney Paquette, S. Vavasis (23 Mar 2020)

ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
Geoffrey Chinot (24 Oct 2019)

A generalization of regularized dual averaging and its dynamics
Shih-Kang Chao, Guang Cheng (22 Sep 2019)

From low probability to high confidence in stochastic convex optimization
Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang (31 Jul 2019)