Unified Optimal Analysis of the (Stochastic) Gradient Method

9 July 2019
Sebastian U. Stich

Papers citing "Unified Optimal Analysis of the (Stochastic) Gradient Method"

18 papers shown

An Enhanced Zeroth-Order Stochastic Frank-Wolfe Framework for Constrained Finite-Sum Optimization
Haishan Ye, Yinghui Huang, Hao Di, Xiangyu Chang (13 Jan 2025)

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner (03 Jun 2024)

The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication
Kumar Kshitij Patel, Margalit Glasgow, Ali Zindari, Lingxiao Wang, Sebastian U. Stich, Ziheng Cheng, Nirmit Joshi, Nathan Srebro (19 May 2024)

Towards Efficient Risk-Sensitive Policy Gradient: An Iteration Complexity Analysis
Rui Liu, Erfaun Noorani, Pratap Tokekar, John S. Baras (13 Mar 2024)

Provably Scalable Black-Box Variational Inference with Structured Variational Families
Joohwan Ko, Kyurae Kim, W. Kim, Jacob R. Gardner (19 Jan 2024) [BDL]

Provable convergence guarantees for black-box variational inference
Justin Domke, Guillaume Garrigos, Robert Mansel Gower (04 Jun 2023)

First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities
Aleksandr Beznosikov, S. Samsonov, Marina Sheshukova, Alexander Gasnikov, A. Naumov, Eric Moulines (25 May 2023)

Tackling benign nonconvexity with smoothing and stochastic gradients
Harsh Vardhan, Sebastian U. Stich (18 Feb 2022)

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou (15 Feb 2022)

Stochastic Polyak Stepsize with a Moving Target
Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat (22 Jun 2021)

Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis
Stephan Wojtowytsch (04 May 2021)

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik (10 Jun 2020)

Minibatch vs Local SGD for Heterogeneous Distributed Learning
Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro (08 Jun 2020) [FedML]

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich (23 Mar 2020) [FedML]

Is Local SGD Better than Minibatch SGD?
Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro (18 Feb 2020) [FedML]

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik (09 Feb 2020)

A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach (10 Dec 2012)

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang (08 Dec 2012)