Continuous-time Models for Stochastic Optimization Algorithms
Antonio Orvieto, Aurelien Lucchi
5 October 2018 · arXiv:1810.02565

Papers citing "Continuous-time Models for Stochastic Optimization Algorithms"

12 of 12 papers shown.

Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling
Matthew Burns, Qingyuan Hou, Michael Huang · 08 Oct 2024

Continuous-time Riemannian SGD and SVRG Flows on Wasserstein Probabilistic Space
Mingyang Yi, Bohan Wang · 24 Jan 2024

Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting
International Conference on Learning Representations (ICLR), 2023
Aleksei Ustimenko, Aleksandr Beznosikov · 09 Oct 2023

An SDE for Modeling SAM: Theory and Insights
International Conference on Machine Learning (ICML), 2023
Enea Monzio Compagnoni, Luca Biggio, Antonio Orvieto, F. Proske, Hans Kersting, Aurelien Lucchi · 19 Jan 2023

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
IEEE Transactions on Automatic Control (TAC), 2022
Mayank Baranwal, Param Budhraja, V. Raj, A. Hota · 07 Dec 2022

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
Neural Information Processing Systems (NeurIPS), 2022
Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan · 13 Oct 2022

Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective
SIAM Journal on Optimization (SIAM J. Optim.), 2022
Xinwei Zhang, Mingyi Hong, N. Elia · 27 Apr 2022

Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers
Liam Hodgkinson, Umut Simsekli, Rajiv Khanna, Michael W. Mahoney · 02 Aug 2021

Free-rider Attacks on Model Aggregation in Federated Learning
Yann Fraboni, Richard Vidal, Marco Lorenzi · 21 Jun 2020

Convergence rates and approximation results for SGD and its continuous-time counterpart
Xavier Fontaine, Valentin De Bortoli, Alain Durmus · 08 Apr 2020

Shadowing Properties of Optimization Algorithms
Neural Information Processing Systems (NeurIPS), 2019
Antonio Orvieto, Aurelien Lucchi · 12 Nov 2019

The Role of Memory in Stochastic Optimization
Conference on Uncertainty in Artificial Intelligence (UAI), 2019
Antonio Orvieto, Jonas Köhler, Aurelien Lucchi · 02 Jul 2019