Efficiently avoiding saddle points with zero order methods: No gradients required

29 October 2019
Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
ArXiv (abs) · PDF · HTML

Papers citing "Efficiently avoiding saddle points with zero order methods: No gradients required"

12 / 12 papers shown

How to Boost Any Loss Function
Richard Nock, Yishay Mansour
02 Jul 2024

Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
Jun Liu, Ye Yuan
ODL
15 Feb 2023

Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
Hualin Zhang, Huan Xiong, Bin Gu
04 Oct 2022

Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki
FedML
01 Sep 2022

Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification
L. Pan, Longbo Huang, Tengyu Ma, Huazhe Xu
OffRL, OnRL
22 Nov 2021

On the Second-order Convergence Properties of Random Search Methods
Aurelien Lucchi, Antonio Orvieto, Adamos Solomou
25 Oct 2021

Stochastic Gradient Langevin Dynamics with Variance Reduction
Zhishen Huang, Stephen Becker
12 Feb 2021

On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
P. Mertikopoulos, Nadav Hallak, Ali Kavis, Volkan Cevher
19 Jun 2020

Zeroth-Order Supervised Policy Improvement
Hao Sun, Ziping Xu, Yuhang Song, Meng Fang, Jiechao Xiong, Bo Dai, Bolei Zhou
OffRL
11 Jun 2020

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
Sijia Liu, Pin-Yu Chen, B. Kailkhura, Gaoyuan Zhang, A. Hero III, P. Varshney
11 Jun 2020

Escaping Saddle Points for Zeroth-order Nonconvex Optimization using Estimated Gradient Descent
Qinbo Bai, Mridul Agarwal, Vaneet Aggarwal
03 Oct 2019

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
30 Sep 2019