
Escaping Saddle Points Efficiently with Occupation-Time-Adapted Perturbations
arXiv: 2005.04507 · 9 May 2020
Xin Guo, Jiequn Han, Mahan Tajrobehkar, Wenpin Tang

Papers citing "Escaping Saddle Points Efficiently with Occupation-Time-Adapted Perturbations"

3 papers
1. SPGD: Steepest Perturbed Gradient Descent Optimization
   Amir M. Vahedi, Horea T. Ilies
   07 Nov 2024

2. Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks
   Jie Hu, Vishwaraj Doshi, Do Young Eun
   18 Jan 2024

3. The Loss Surfaces of Multilayer Networks
   A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
   30 Nov 2014