ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Role of Memory in Stochastic Optimization
Antonio Orvieto, Jonas Köhler, Aurelien Lucchi
arXiv:1907.01678 · 2 July 2019

Papers citing "The Role of Memory in Stochastic Optimization" (12 papers)
Almost sure convergence rates of stochastic gradient methods under gradient domination
Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring
22 May 2024
Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks
Hristo Papazov, Scott Pesme, Nicolas Flammarion
08 Mar 2024
On Almost Sure Convergence Rates of Stochastic Gradient Methods
Jun Liu, Ye Yuan
09 Feb 2022
Revisiting the Role of Euler Numerical Integration on Acceleration and Stability in Convex Optimization
Peiyuan Zhang, Antonio Orvieto, Hadi Daneshmand, Thomas Hofmann, Roy S. Smith
23 Feb 2021
Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt, Frank Schneider, Philipp Hennig
03 Jul 2020
Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu
16 Jun 2020
Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball
Othmane Sebbouh, Robert Mansel Gower, Aaron Defazio
14 Jun 2020
Quasi-hyperbolic momentum and Adam for deep learning
Jerry Ma, Denis Yarats
16 Oct 2018
Continuous-time Models for Stochastic Optimization Algorithms
Antonio Orvieto, Aurelien Lucchi
05 Oct 2018
On the Generalization of Stochastic Gradient Descent with Momentum
Ali Ramezani-Kebrya, Kimon Antonakopoulos, V. Cevher, Ashish Khisti, Ben Liang
12 Sep 2018
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016
A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
04 Mar 2015