
Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent

23 March 2023
Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
arXiv:2303.13093

Papers citing "Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent"

9 / 9 papers shown
Dynamical stability and chaos in artificial neural network trajectories along training
Kaloyan Danovski, Miguel C. Soriano, Lucas Lacasa
08 Apr 2024
A Precise Characterization of SGD Stability Using Loss Surface Geometry
Gregory Dexter, Borja Ocejo, S. Keerthi, Aman Gupta, Ayan Acharya, Rajiv Khanna
MLT · 22 Jan 2024
Symmetry Induces Structure and Constraint of Learning
Liu Ziyin
29 Sep 2023
Law of Balance and Stationary Distribution of Stochastic Gradient Descent
Liu Ziyin, Hongchao Li, Masahito Ueda
13 Aug 2023
Exact Mean Square Linear Stability Analysis for SGD
Rotem Mulayoff, T. Michaeli
MLT · 13 Jun 2023
SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz
FedML · MLT · 21 Feb 2023
spred: Solving $L_1$ Penalty with SGD
Liu Ziyin, Zihao W. Wang
03 Oct 2022
What shapes the loss landscape of self-supervised learning?
Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
02 Oct 2022
Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
J. Zhang, Haochuan Li, S. Sra, Ali Jadbabaie
12 Oct 2021