ResearchTrend.AI
On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems (arXiv:2006.11144)
19 June 2020
P. Mertikopoulos, Nadav Hallak, Ali Kavis, V. Cevher

Papers citing "On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems"

50 of 60 citing papers shown.
Spike-timing-dependent Hebbian learning as noisy gradient descent
Niklas Dexheimer, Sascha Gaudlitz, Johannes Schmidt-Hieber (15 May 2025)

Stochastic Gradient Descent in Non-Convex Problems: Asymptotic Convergence with Relaxed Step-Size via Stopping Time Methods
Ruinan Jin, Difei Cheng, Hong Qiao, Xin Shi, Shaodong Liu, Bo Zhang (17 Apr 2025)

A Near Complete Nonasymptotic Generalization Theory For Multilayer Neural Networks: Beyond the Bias-Variance Tradeoff
Hao Yu, Xiangyang Ji (03 Mar 2025) [AI4CE]

Nesterov acceleration in benignly non-convex landscapes
Kanan Gupta, Stephan Wojtowytsch (10 Oct 2024)

Dynamic Decoupling of Placid Terminal Attractor-based Gradient Descent Algorithm
Jinwei Zhao, Marco Gori, Alessandro Betti, S. Melacci, Hongtao Zhang, Jiedong Liu, Xinhong Hei (10 Sep 2024)

Lyapunov weights to convey the meaning of time in physics-informed neural networks
Gabriel Turinici (31 Jul 2024)
Almost sure convergence rates of stochastic gradient methods under gradient domination
Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring (22 May 2024)

Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent
Holger Dette, Carina Graw (21 May 2024)

Optimal time sampling in physics-informed neural networks
Gabriel Turinici (29 Apr 2024) [PINN]

Federated reinforcement learning for robot motion planning with zero-shot generalization
Zhenyuan Yuan, Siyuan Xu, Minghui Zhu (20 Mar 2024) [FedML]

Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity
Yiyue Chen, H. Vikalo, C. Wang (20 Dec 2023) [FedML]

Learning Unorthogonalized Matrices for Rotation Estimation
Kerui Gu, Zhihao Li, Shiyong Liu, Jianzhuang Liu, Songcen Xu, Youliang Yan, Michael Bi Mi, Kenji Kawaguchi, Angela Yao (01 Dec 2023)
Adam-like Algorithm with Smooth Clipping Attains Global Minima: Analysis Based on Ergodicity of Functional SDEs
Keisuke Suzuki (29 Nov 2023)

Riemannian stochastic optimization methods avoid strict saddle points
Ya-Ping Hsieh, Mohammad Reza Karimi, Andreas Krause, P. Mertikopoulos (04 Nov 2023)

Tackling the Curse of Dimensionality with Physics-Informed Neural Networks
Zheyuan Hu, K. Shukla, George Karniadakis, Kenji Kawaguchi (23 Jul 2023) [PINN, AI4CE]

Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks
Jing An, Jianfeng Lu (18 Apr 2023)

High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance
Krishnakumar Balasubramanian, Promit Ghosal, Ye He (03 Apr 2023)

Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent
Liu Ziyin, Botao Li, Tomer Galanti, Masakuni Ueda (23 Mar 2023)

On the existence of optimal shallow feedforward networks with ReLU activation
Steffen Dereich, Sebastian Kassing (06 Mar 2023)
On the existence of minimizers in shallow residual ReLU neural network optimization landscapes
Steffen Dereich, Arnulf Jentzen, Sebastian Kassing (28 Feb 2023)

Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression
Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal (20 Feb 2023)

Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
Jun Liu, Ye Yuan (15 Feb 2023) [ODL]

FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering
Yongxin Guo, Xiaoying Tang, Tao R. Lin (29 Jan 2023) [OOD, FedML]

Variance Reduction for Score Functions Using Optimal Baselines
Ronan L. Keane, H. Gao (27 Dec 2022)

Efficiency Ordering of Stochastic Gradient Descent
Jie Hu, Vishwaraj Doshi, Do Young Eun (15 Sep 2022)

Convergence of Batch Updating Methods with Approximate Gradients and/or Noisy Measurements: Theory and Computational Results
Tadipatri Uday, M. Vidyasagar (12 Sep 2022)
Neural Tangent Kernel: A Survey
Eugene Golikov, Eduard Pokonechnyy, Vladimir Korviakov (29 Aug 2022)

Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation
Jeffrey Willette, Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, S. Hwang (26 Aug 2022)

A unified stochastic approximation framework for learning in games
P. Mertikopoulos, Ya-Ping Hsieh, V. Cevher (08 Jun 2022)

A Unified Convergence Theorem for Stochastic Optimization Methods
Xiao Li, Andre Milzarek (08 Jun 2022)

Metrizing Fairness
Yves Rychener, Bahar Taşkesen, Daniel Kuhn (30 May 2022) [FaML]

Uniform Generalization Bound on Time and Inverse Temperature for Gradient Descent Algorithm and its Application to Analysis of Simulated Annealing
Keisuke Suzuki (25 May 2022) [AI4CE]

Weak Convergence of Approximate Reflection Coupling and its Application to Non-convex Optimization
Keisuke Suzuki (24 May 2022)
A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima
Tae-Eon Ko, Xiantao Li (21 Mar 2022)

Monte Carlo PINNs: deep learning approach for forward and inverse problems involving high dimensional fractional partial differential equations
Ling Guo, Hao Wu, Xiao-Jun Yu, Tao Zhou (16 Mar 2022) [PINN, AI4CE]

On Almost Sure Convergence Rates of Stochastic Gradient Methods
Jun Liu, Ye Yuan (09 Feb 2022)

A subsampling approach for Bayesian model selection
Jon Lachmann, G. Storvik, F. Frommlet, Aliaksadr Hubin (31 Jan 2022) [BDL]

On Uniform Boundedness Properties of SGD and its Momentum Variants
Xiaoyu Wang, M. Johansson (25 Jan 2022)

3DPG: Distributed Deep Deterministic Policy Gradient Algorithms for Networked Multi-Agent Systems
Adrian Redder, Arunselvan Ramaswamy, Holger Karl (03 Jan 2022) [OffRL]

Non-Asymptotic Analysis of Online Multiplicative Stochastic Gradient Descent
Riddhiman Bhattacharya, Tiefeng Jiang (14 Dec 2021)
Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization
Zaiwei Chen, Shancong Mou, S. T. Maguluri (11 Nov 2021)

Inertial Newton Algorithms Avoiding Strict Saddle Points
Camille Castera (08 Nov 2021) [ODL]

Adaptation of the Independent Metropolis-Hastings Sampler with Normalizing Flow Proposals
James A. Brofos, Marylou Gabrié, Marcus A. Brubaker, Roy R. Lederman (25 Oct 2021)

Accelerated Almost-Sure Convergence Rates for Nonconvex Stochastic Gradient Descent using Stochastic Learning Rates
Theodoros Mamalis, D. Stipanović, R. Tao (25 Oct 2021)

Beyond Exact Gradients: Convergence of Stochastic Soft-Max Policy Gradient Methods with Entropy Regularization
Yuhao Ding, Junzi Zhang, Hyunin Lee, Javad Lavaei (19 Oct 2021)

Global Convergence and Stability of Stochastic Gradient Descent
V. Patel, Shushu Zhang, Bowen Tian (04 Oct 2021)

Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer
S. Schechtman (06 Sep 2021)
Convergence of gradient descent for learning linear neural networks
Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege (04 Aug 2021) [MLT]

SGD with a Constant Large Learning Rate Can Converge to Local Maxima
Liu Ziyin, Botao Li, James B. Simon, Masakuni Ueda (25 Jul 2021)

Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses
Keegan Harris, Daniel Ngo, Logan Stapleton, Hoda Heidari, Zhiwei Steven Wu (12 Jul 2021)