A Dynamical Systems Perspective on Nesterov Acceleration

International Conference on Machine Learning (ICML), 2019
17 May 2019
Michael Muehlebach
Michael I. Jordan
arXiv (abs) · PDF · HTML

Papers citing "A Dynamical Systems Perspective on Nesterov Acceleration"

44 / 44 papers shown

Continuous-Time Analysis of Heavy Ball Momentum in Min-Max Games
Yi-Hu Feng, Kaito Fujii, Stratis Skoulakis, Xiao Wang, Volkan Cevher
26 May 2025

Learning by solving differential equations
Benoit Dherin, Michael Munn, Hanna Mazzawi, Michael Wunder, Sourabh Medapati, Javier Gonzalvo
19 May 2025

Error estimates between SGD with momentum and underdamped Langevin diffusion
Arnaud Guillin, Yu Wang, Lihu Xu, Haoran Yang
22 Oct 2024

DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction
International Conference on Learning Representations (ICLR), 2024
Xinwei Zhang, Zhiqi Bu, Borja Balle, Mingyi Hong, Meisam Razaviyayn, Vahab Mirrokni
04 Oct 2024

Is All Learning (Natural) Gradient Descent?
Lucas Shoji, Kenta Suzuki, Leo Kozachkov
24 Sep 2024

Generalized Continuous-Time Models for Nesterov's Accelerated Gradient Methods
Chanwoong Park, Youngchae Cho, Insoon Yang
02 Sep 2024

Accelerated forward-backward and Douglas-Rachford splitting dynamics
Ibrahim Kurban Özaslan, Mihailo R. Jovanović
30 Jul 2024

Distributed Event-Based Learning via ADMM
Güner Dilsad Er, Sebastian Trimpe, Michael Muehlebach
17 May 2024 · FedML

A Variational Perspective on High-Resolution ODEs
Neural Information Processing Systems (NeurIPS), 2023
Hoomaan Maskan, K. C. Zygalakis, A. Yurtsever
03 Nov 2023

Towards Hyperparameter-Agnostic DNN Training via Dynamical System Insights
Carmel Fiscko, Aayushya Agarwal, Yihan Ruan, S. Kar, L. Pileggi, Bruno Sinopoli
21 Oct 2023

Accelerating optimization over the space of probability measures
Shi Chen, Wenxuan Wu, Yuhang Yao, Stephen J. Wright
06 Oct 2023

Accelerated Optimization Landscape of Linear-Quadratic Regulator
Le Feng, Yuan-Hua Ni
07 Jul 2023

Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity
Bowen Li, Bin Shi, Ya-xiang Yuan
16 Jun 2023

On the connections between optimization algorithms, Lyapunov functions, and differential equations: theory and insights
SIAM Journal on Optimization (SIOPT), 2023
P. Dobson, J. Sanz-Serna, K. Zygalakis
15 May 2023

On Underdamped Nesterov's Acceleration
Shu Chen, Bin Shi, Ya-xiang Yuan
28 Apr 2023

Accelerated First-Order Optimization under Nonlinear Constraints
Mathematical Programming (Math. Program.), 2023
Michael Muehlebach, Michael I. Jordan
01 Feb 2023

A Nonstochastic Control Approach to Optimization
Xinyi Chen, Elad Hazan
19 Jan 2023

Neuroevolution of Physics-Informed Neural Nets: Benchmark Problems and Comparative Results
Nicholas Sung, Jian Cheng Wong, C. Ooi, Abhishek Gupta, P. Chiu, Yew-Soon Ong
15 Dec 2022 · PINN

Revisiting the acceleration phenomenon via high-resolution differential equations
Shu Chen, Bin Shi, Ya-xiang Yuan
12 Dec 2022

NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizer
Valentin Leplat, D. Merkulov, Aleksandr Katrutsa, Daniel Bershatsky, Olga Tsymboi, Ivan Oseledets
29 Sep 2022

Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms
IEEE Transactions on Automatic Control (TAC), 2022
Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović
24 Sep 2022

Gradient Norm Minimization of Nesterov Acceleration: $o(1/k^3)$
Shu Chen, Bin Shi, Ya-xiang Yuan
19 Sep 2022

Continuous-time Analysis for Variational Inequalities: An Overview and Desiderata
Tatjana Chavdarova, Ya-Ping Hsieh, Michael I. Jordan
14 Jul 2022

Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective
SIAM Journal on Optimization (SIAM J. Optim.), 2022
Xinwei Zhang, Mingyi Hong, N. Elia
27 Apr 2022 · FedML

Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum
International Conference on Learning Representations (ICLR), 2022
Kirby Banman, Liam Peet-Paré, N. Hegde, Alona Fyshe, Martha White
22 Mar 2022

A closed loop gradient descent algorithm applied to Rosenbrock's function
Australian and New Zealand Control Conference (ANZCC), 2021
Subhransu S. Bhattacharjee, I. Petersen
29 Aug 2021

On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems
Journal of Machine Learning Research (JMLR), 2021
Michael Muehlebach, Michael I. Jordan
17 Jul 2021

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip
Mathieu Even, Raphael Berthier, Francis R. Bach, Nicolas Flammarion, Pierre Gaillard, Aymeric Dieuleveut, Laurent Massoulié, Adrien B. Taylor
10 Jun 2021

A Contraction Theory Approach to Optimization Algorithms from Acceleration Flows
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Pedro Cisneros-Velarde, Francesco Bullo
18 May 2021

Revisiting the Role of Euler Numerical Integration on Acceleration and Stability in Convex Optimization
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Peiyuan Zhang, Antonio Orvieto, Hadi Daneshmand, Thomas Hofmann, Roy S. Smith
23 Feb 2021

LEAD: Min-Max Optimization from a Physical Perspective
Reyhane Askari Hemmat, Amartya Mitra, Guillaume Lajoie, Alexia Jolicoeur-Martineau
26 Oct 2020

The connections between Lyapunov functions for some optimization algorithms and differential equations
SIAM Journal on Numerical Analysis (SINUM), 2020
J. Sanz-Serna, K. Zygalakis
01 Sep 2020

Continuous-in-Depth Neural Networks
A. Queiruga, N. Benjamin Erichson, D. Taylor, Michael W. Mahoney
05 Aug 2020

Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware
N. Benjamin Erichson, D. Taylor, Qixuan Wu, Michael W. Mahoney
31 Jul 2020 · AAML

Meta Learning in the Continuous Time Limit
Ruitu Xu, Lin Chen, Amin Karbasi
19 Jun 2020

Neural Differential Equations for Single Image Super-resolution
International Conference on Learning Representations (ICLR), 2020
Teven Le Scao
02 May 2020

On dissipative symplectic integration with applications to gradient-based optimization
Journal of Statistical Mechanics: Theory and Experiment (JSTAT), 2020
G. França, Michael I. Jordan, René Vidal
15 Apr 2020

Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives
Journal of Machine Learning Research (JMLR), 2020
Michael Muehlebach, Michael I. Jordan
28 Feb 2020

Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
Neural Computation (Neural Comput.), 2019
Nicholas M. Boffi, Jean-Jacques E. Slotine
31 Dec 2019

Bregman dynamics, contact transformations and convex optimization
Information Geometry (IG), 2019
A. Bravetti, M. Daza-Torres, Hugo Flores-Arguedas, M. Betancourt
06 Dec 2019

Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
Sepideh Hassan-Moghaddam, Mihailo R. Jovanović
23 Aug 2019

Efficient stochastic optimisation by unadjusted Langevin Monte Carlo. Application to maximum marginal likelihood and empirical Bayesian estimation
Statistics and Computing (Stat. Comput.), 2019
Valentin De Bortoli, Alain Durmus, Marcelo Pereyra, A. F. Vidal
28 Jun 2019

Generalized Momentum-Based Methods: A Hamiltonian Perspective
SIAM Journal on Optimization (SIOPT), 2019
Jelena Diakonikolas, Michael I. Jordan
02 Jun 2019

New optimization algorithms for neural network training using operator splitting techniques
Neural Networks (NN), 2019
C. Alecsa, Titus Pinta, Imre Boros
29 Apr 2019