ResearchTrend.AI

Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
arXiv:1712.04104
12 December 2017
Benjamin Grimmer

Papers citing "Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity"

26 papers shown
Glocal Smoothness: Line Search can really help!
Curtis Fox, Aaron Mishkin, Sharan Vaswani, Mark Schmidt (14 Jun 2025)
On the asymptotic behaviour of stochastic processes, with applications to supermartingale convergence, Dvoretzky's approximation theorem, and stochastic quasi-Fejér monotonicity
Morenikeji Neri, Nicholas Pischke, Thomas Powell (17 Apr 2025)
Towards Weaker Variance Assumptions for Stochastic Optimization
Ahmet Alacaoglu, Yura Malitsky, Stephen J. Wright (14 Apr 2025)
Understanding Gradient Orthogonalization for Deep Learning via Non-Euclidean Trust-Region Optimization
Dmitry Kovalev (16 Mar 2025)
Some Primal-Dual Theory for Subgradient Methods for Strongly Convex Optimization
Benjamin Grimmer, Danlin Li (31 Dec 2024)
Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
Daniel Dodd, Louis Sharrock, Christopher Nemeth (04 Jun 2024)
Towards Stability of Parameter-free Optimization
Yijiang Pang, Shuyang Yu, Hoang Bao, Jiayu Zhou (07 May 2024)
Optimization on a Finer Scale: Bounded Local Subgradient Variation Perspective
Jelena Diakonikolas, Cristóbal Guzmán (24 Mar 2024)
Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Aaron Mishkin, Ahmed Khaled, Yuanhao Wang, Aaron Defazio, Robert Mansel Gower (06 Mar 2024)
Revisiting Convergence of AdaGrad with Relaxed Assumptions
Yusu Hong, Junhong Lin (21 Feb 2024)
Stochastic Weakly Convex Optimization Beyond Lipschitz Continuity
Wenzhi Gao, Qi Deng (25 Jan 2024)
A Unified Analysis for the Subgradient Methods Minimizing Composite Nonconvex, Nonsmooth and Non-Lipschitz Functions
Daoli Zhu, Lei Zhao, Shuzhong Zhang (30 Aug 2023)
Normalized Gradients for All
Francesco Orabona (10 Aug 2023)
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Ahmed Khaled, Konstantin Mishchenko, Chi Jin (25 May 2023)
Revisiting Subgradient Method: Complexity and Convergence Beyond Lipschitz Continuity
Xiao Li, Lei Zhao, Daoli Zhu, Anthony Man-Cho So (23 May 2023)
Gauges and Accelerated Optimization over Smooth and/or Strongly Convex Sets
Ning Liu, Benjamin Grimmer (09 Mar 2023)
Randomized Coordinate Subgradient Method for Nonsmooth Composite Optimization
Lei Zhao, Ding-Yuan Chen, Daoli Zhu, Xiao Li (30 Jun 2022)
Stochastic Approximation with Discontinuous Dynamics, Differential Inclusions, and Applications
N. Nguyen, G. Yin (28 Aug 2021)
Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
Yihan Zhou, V. S. Portella, Mark Schmidt, Nicholas J. A. Harvey (22 Oct 2020)
Mitigating Sybil Attacks on Differential Privacy based Federated Learning
Yupeng Jiang, Yong Li, Yipeng Zhou, Xi Zheng (20 Oct 2020)
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely (26 Aug 2020)
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert Mansel Gower, Peter Richtárik (20 Jun 2020)
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth, Peter Richtárik (19 Jun 2020)
Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik (09 Feb 2020)
Unified Optimal Analysis of the (Stochastic) Gradient Method
Sebastian U. Stich (09 Jul 2019)
The Value of Collaboration in Convex Machine Learning with Differential Privacy
Nan Wu, Farhad Farokhi, David B. Smith, M. Kâafar (24 Jun 2019)