Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains

Aymeric Dieuleveut, Alain Durmus, Francis R. Bach · 20 July 2017
arXiv: 1707.06386
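
For orientation before the citation list, here is a minimal, self-contained sketch (not taken from the paper) of the object the title refers to: SGD with a constant step size on a least-squares problem, whose iterates form a homogeneous Markov chain. With a fixed step size the iterates do not converge to the optimum but settle into a stationary distribution around it, and Polyak-Ruppert averaging of the iterates lands much closer. The problem sizes and the step size below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma, steps = 5, 1000, 0.05, 20000   # illustrative dimensions and step size

# Synthetic least-squares data: y = X theta_star + noise.
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.5 * rng.normal(size=n)

theta = np.zeros(d)
running_sum = np.zeros(d)
for _ in range(steps):
    i = rng.integers(n)                     # pick one sample uniformly at random
    grad = (X[i] @ theta - y[i]) * X[i]     # gradient of 0.5 * (x_i . theta - y_i)^2
    theta = theta - gamma * grad            # constant step-size update: a homogeneous Markov chain in theta
    running_sum += theta

theta_bar = running_sum / steps                      # Polyak-Ruppert average of the iterates
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]     # empirical least-squares optimum

print("last iterate distance to optimum:    ", np.linalg.norm(theta - theta_hat))
print("averaged iterate distance to optimum:", np.linalg.norm(theta_bar - theta_hat))
```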

Papers citing "Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains"

50 of 106 citing papers shown.

QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning
Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines · 01 Jun 2021

Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
Prateek Jain, S. Kowshik, Dheeraj M. Nagaraj, Praneeth Netrapalli · 24 May 2021

Quantifying the mini-batching error in Bayesian inference for Adaptive Langevin dynamics
Inass Sekkat, G. Stoltz · 21 May 2021

Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis
Stephan Wojtowytsch · 04 May 2021

Repurposing Pretrained Models for Robust Out-of-domain Few-Shot Learning
Namyeong Kwon, Hwidong Na, Gabriel Huang, Simon Lacoste-Julien · 16 Mar 2021

On Riemannian Stochastic Approximation Schemes with Fixed Step-Size
Alain Durmus, P. Jiménez, Eric Moulines, Salem Said · 15 Feb 2021

Strength of Minibatch Noise in SGD
Liu Ziyin, Kangqiao Liu, Takashi Mori, Masakuni Ueda · 10 Feb 2021

Statistical Inference for Polyak-Ruppert Averaged Zeroth-order Stochastic Gradient Algorithm
Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian · 10 Feb 2021

Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu, Liu Ziyin, Masakuni Ueda · 07 Dec 2020

Robust, Accurate Stochastic Optimization for Variational Inference
Akash Kumar Dhaka, Alejandro Catalina, Michael Riis Andersen, Maans Magnusson, Jonathan H. Huggins, Aki Vehtari · 01 Sep 2020

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates
Krishnakumar Balasubramanian, Saeed Ghadimi, A. Nguyen · 24 Aug 2020

Regret Analysis of a Markov Policy Gradient Algorithm for Multi-arm Bandits
D. Denisov, N. Walton · 20 Jul 2020

Weak error analysis for stochastic gradient descent optimization algorithms
A. Bercher, Lukas Gonon, Arnulf Jentzen, Diyora Salimova · 03 Jul 2020

Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems
Zhan Gao, Alec Koppel, Alejandro Ribeiro · 02 Jul 2020

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
Scott Pesme, Aymeric Dieuleveut, Nicolas Flammarion · 01 Jul 2020

Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees
Constantin Philippenko, Aymeric Dieuleveut · 25 Jun 2020

Taming GANs with Lookahead-Minmax
Tatjana Chavdarova, Matteo Pagliardini, Sebastian U. Stich, François Fleuret, Martin Jaggi · 25 Jun 2020

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu · 16 Jun 2020

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu · 14 Jun 2020

The Heavy-Tail Phenomenon in SGD
Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu · 08 Jun 2020

Asymptotic Analysis of Conditioned Stochastic Gradient Descent
Rémi Leluc, François Portier · 04 Jun 2020

SDE approximations of GANs training and its long-run behavior
Haoyang Cao, Xin Guo · 03 Jun 2020

Analysis of Stochastic Gradient Descent in Continuous Time
J. Latz · 15 Apr 2020

On Learning Rates and Schrödinger Operators
Bin Shi, Weijie J. Su, Michael I. Jordan · 15 Apr 2020

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration
Wenlong Mou, C. J. Li, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan · 09 Apr 2020

A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
Philip Amortila, Doina Precup, Prakash Panangaden, Marc G. Bellemare · 27 Mar 2020

Convergence of Recursive Stochastic Algorithms using Wasserstein Divergence
Abhishek Gupta, W. Haskell · 25 Mar 2020

Online stochastic gradient descent on non-convex losses from high-dimensional inference
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath · 23 Mar 2020

The Implicit Regularization of Stochastic Gradient Flow for Least Squares
Alnur Ali, Yan Sun, Robert Tibshirani · 17 Mar 2020

Batch Normalization Provably Avoids Rank Collapse for Randomly Initialised Deep Networks
Hadi Daneshmand, Jonas Köhler, Francis R. Bach, Thomas Hofmann, Aurelien Lucchi · 03 Mar 2020

Debiasing Stochastic Gradient Descent to handle missing values
Julie Josse, Aude Sportisse, Claire Boyer, Aymeric Dieuleveut · 21 Feb 2020

On the Effectiveness of Richardson Extrapolation in Machine Learning
Francis R. Bach · 07 Feb 2020

Mixing of Stochastic Accelerated Gradient Descent
Peiyuan Zhang, Hadi Daneshmand, Thomas Hofmann · 31 Oct 2019

Understanding the Role of Momentum in Stochastic Gradient Methods
Igor Gitman, Hunter Lang, Pengchuan Zhang, Lin Xiao · 30 Oct 2019

Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval
Yan Shuo Tan, Roman Vershynin · 28 Oct 2019

Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks
Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu · 19 Oct 2019

Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic
Matteo Sordello, Niccolò Dalmasso, Hangfeng He, Weijie Su · 18 Oct 2019

Error Lower Bounds of Constant Step-size Stochastic Gradient Descent
Zhiyan Ding, Yiding Chen, Qin Li, Xiaojin Zhu · 18 Oct 2019

Optimizing Nondecomposable Data Dependent Regularizers via Lagrangian Reparameterization offers Significant Performance and Efficiency Gains
Sathya Ravi, Abhay Venkatesh, G. Fung, Vikas Singh · 26 Sep 2019

A generalization of regularized dual averaging and its dynamics
Shih-Kang Chao, Guang Cheng · 22 Sep 2019

Using Statistics to Automate Stochastic Optimization
Hunter Lang, Pengchuan Zhang, Lin Xiao · 21 Sep 2019

Continuous Time Analysis of Momentum Methods
Nikola B. Kovachki, Andrew M. Stuart · 10 Jun 2019

Reducing the variance in online optimization by transporting past gradients
Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux · 08 Jun 2019

Communication trade-offs for synchronized distributed SGD with large step size
Kumar Kshitij Patel, Aymeric Dieuleveut · 25 Apr 2019

Some Limit Properties of Markov Chains Induced by Stochastic Recursive Algorithms
Abhishek Gupta, Hao Chen, Jianzong Pi, Gaurav Tendolkar · 24 Apr 2019

Convergence rates for the stochastic gradient descent method for non-convex objective functions
Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen · 02 Apr 2019

Uniform-in-Time Weak Error Analysis for Stochastic Gradient Descent Algorithms via Diffusion Approximation
Yuanyuan Feng, Tingran Gao, Lei Li, Jian-Guo Liu, Yulong Lu · 02 Feb 2019

Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances
Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu · 22 Jan 2019

The promises and pitfalls of Stochastic Gradient Langevin Dynamics
N. Brosse, Alain Durmus, Eric Moulines · 25 Nov 2018

Gen-Oja: A Two-time-scale approach for Streaming CCA
Kush S. Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L. Bartlett, Michael I. Jordan · 20 Nov 2018