arXiv:1608.04636
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
16 August 2016
Hamed Karimi
J. Nutini
Mark Schmidt
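For context, the Polyak-Łojasiewicz (PL) condition referred to in the title can be stated as follows; this is a standard formulation, and the notation (μ for the PL constant, L for the smoothness constant, f* for the minimum value) is supplied here rather than taken from this page. A differentiable function f satisfies the PL condition if, for some μ > 0,

\[
\tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \qquad \text{for all } x,
\]

and under this condition together with L-smoothness, gradient descent with step size 1/L converges linearly:

\[
f(x_k) - f^{*} \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,\bigl(f(x_0) - f^{*}\bigr).
\]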
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition" (50 of 588 papers shown):
- Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence. Ilyas Fatkhullin, Niao He. 27 Feb 2024.
- Investigating Deep Watermark Security: An Adversarial Transferability Perspective. Biqing Qi, Junqi Gao, Yiang Luo, Jianxing Liu, Ligang Wu, Bowen Zhou. 26 Feb 2024. [AAML]
- A Lower Bound for Estimating Fréchet Means. Shayan Hundrieser, B. Eltzner, S. Huckemann. 19 Feb 2024.
- How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization. Andrew Lowy, Jonathan R. Ullman, Stephen J. Wright. 17 Feb 2024.
- An Accelerated Distributed Stochastic Gradient Method with Momentum. Kun-Yen Huang, Shi Pu, Angelia Nedić. 15 Feb 2024.
- Differentially Private Zeroth-Order Methods for Scalable Large Language Model Finetuning. Zhicheng Liu, Jian Lou, Wenxuan Bao, Yihan Hu, Baochun Li, Zhan Qin, K. Ren. 12 Feb 2024.
- Towards Quantifying the Preconditioning Effect of Adam. Rudrajit Das, Naman Agarwal, Sujay Sanghavi, Inderjit S. Dhillon. 11 Feb 2024.
- Federated Learning Can Find Friends That Are Advantageous. N. Tupitsa, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov. 07 Feb 2024. [FedML]
- Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks. Arnulf Jentzen, Adrian Riekert. 07 Feb 2024.
- Optimal sampling for stochastic and natural gradient descent. Robert Gruhlke, A. Nouy, Philipp Trunschke. 05 Feb 2024.
- Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation. Sobihan Surendran, Antoine Godichon-Baggioni, Adeline Fermanian, Sylvain Le Corff. 05 Feb 2024.
- Careful with that Scalpel: Improving Gradient Surgery with an EMA. Yu-Guan Hsieh, James Thornton, Eugène Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin. 05 Feb 2024. [MedIm]
- On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition. Yunyan Bai, Yuxing Liu, Luo Luo. 04 Feb 2024.
- Challenges in Training PINNs: A Loss Landscape Perspective. Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell. 02 Feb 2024. [AI4CE, PINN, ODL]
- Monotone, Bi-Lipschitz, and Polyak-Lojasiewicz Networks. Ruigang Wang, Krishnamurthy Dvijotham, I. Manchester. 02 Feb 2024.
- Diffusion Stochastic Optimization for Min-Max Problems. H. Cai, Sulaiman A. Alghunaim, Ali H. Sayed. 26 Jan 2024.
- Continuous-time Riemannian SGD and SVRG Flows on Wasserstein Probabilistic Space. Mingyang Yi, Bohan Wang. 24 Jan 2024.
- Efficient Learning in Polyhedral Games via Best Response Oracles. Darshan Chakrabarti, Gabriele Farina, Christian Kroer. 06 Dec 2023.
- Convergence Rates for Stochastic Approximation: Biased Noise with Unbounded Variance, and Applications. Rajeeva Laxman Karandikar, M. Vidyasagar. 05 Dec 2023.
- A New Random Reshuffling Method for Nonsmooth Nonconvex Finite-sum Optimization. Junwen Qiu, Xiao Li, Andre Milzarek. 02 Dec 2023.
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach. Kai Li, Jingjing Zheng, Xinnan Yuan, W. Ni, Ozgur B. Akan, H. Vincent Poor. 30 Nov 2023. [AAML]
- Critical Influence of Overparameterization on Sharpness-aware Minimization. Sungbin Shin, Dongyeop Lee, Maksym Andriushchenko, Namhoon Lee. 29 Nov 2023. [AAML]
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach. Xinwei Zhang, Zhiqi Bu, Zhiwei Steven Wu, Mingyi Hong. 24 Nov 2023.
- Locally Optimal Descent for Dynamic Stepsize Scheduling. Gilad Yehudai, Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain. 23 Nov 2023.
- Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates. Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán. 22 Nov 2023.
- Non-Uniform Smoothness for Gradient Descent. A. Berahas, Lindon Roberts, Fred Roosta. 15 Nov 2023.
- A Large Deviations Perspective on Policy Gradient Algorithms. Wouter Jongeneel, Daniel Kuhn, Mengmeng Li. 13 Nov 2023.
- Adaptive Mirror Descent Bilevel Optimization. Feihu Huang. 08 Nov 2023.
- Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization. Wei Shen, Minhui Huang, Jiawei Zhang, Cong Shen. 02 Nov 2023. [FedML]
- AdaSub: Stochastic Optimization Using Second-Order Information in Low-Dimensional Subspaces. João Victor Galvão da Mata, Martin S. Andersen. 30 Oct 2023.
- Controlled Decoding from Language Models. Sidharth Mudgal, Jong Lee, H. Ganapathy, Yaguang Li, Tao Wang, ..., Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, Ahmad Beirami. 25 Oct 2023.
- DYNAMITE: Dynamic Interplay of Mini-Batch Size and Aggregation Frequency for Federated Learning with Static and Streaming Dataset. Weijie Liu, Xiaoxi Zhang, Jingpu Duan, Carlee Joe-Wong, Zhi Zhou, Xu Chen. 20 Oct 2023.
- A connection between Tempering and Entropic Mirror Descent. Nicolas Chopin, F. R. Crucinio, Anna Korba. 18 Oct 2023.
- DPZero: Private Fine-Tuning of Language Models without Backpropagation. Liang Zhang, Bingcong Li, K. K. Thekumparampil, Sewoong Oh, Niao He. 14 Oct 2023.
- Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity. Youssef Allouah, R. Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk. 24 Sep 2023. [OOD]
- Distributionally Time-Varying Online Stochastic Optimization under Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk Statistical Learning. Yuen-Man Pun, Farhad Farokhi, Iman Shames. 18 Sep 2023.
- On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation. Jeongyeol Kwon, Dohyun Kwon, Steve Wright, Robert D. Nowak. 04 Sep 2023.
- A Unified Analysis for the Subgradient Methods Minimizing Composite Nonconvex, Nonsmooth and Non-Lipschitz Functions. Daoli Zhu, Lei Zhao, Shuzhong Zhang. 30 Aug 2023.
- Non-ergodic linear convergence property of the delayed gradient descent under the strongly convexity and the Polyak-Łojasiewicz condition. Hyunggwon Choi, Woocheol Choi, Jinmyoung Seok. 23 Aug 2023.
- A Homogenization Approach for Gradient-Dominated Stochastic Optimization. Jiyuan Tan, Chenyu Xue, Chuwen Zhang, Qi Deng, Dongdong Ge, Yinyu Ye. 21 Aug 2023.
- Variance reduction techniques for stochastic proximal point algorithms. Cheik Traoré, Vassilis Apidopoulos, Saverio Salzo, S. Villa. 18 Aug 2023.
- Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods. A. Ma, Yangchen Pan, Amir-massoud Farahmand. 13 Aug 2023. [AAML]
- Faster Stochastic Algorithms for Minimax Optimization under Polyak-Łojasiewicz Conditions. Le-Yu Chen, Boyuan Yao, Luo Luo. 29 Jul 2023.
- Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case. Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu. 20 Jul 2023.
- Zero-th Order Algorithm for Softmax Attention Optimization. Yichuan Deng, Zhihang Li, Sridhar Mahadevan, Zhao Song. 17 Jul 2023.
- Performance of ℓ1 Regularization for Sparse Convex Optimization. Kyriakos Axiotis, T. Yasuda. 14 Jul 2023.
- Invex Programs: First Order Algorithms and Their Convergence. Adarsh Barik, S. Sra, Jean Honorio. 10 Jul 2023.
- Fairness-aware Federated Minimax Optimization with Convergence Guarantee. Gerry Windiarto Mohamad Dunda, Shenghui Song. 10 Jul 2023. [FedML]
- Accelerated Optimization Landscape of Linear-Quadratic Regulator. Le Feng, Yuan-Hua Ni. 07 Jul 2023.
- Analyzing and Improving Greedy 2-Coordinate Updates for Equality-Constrained Optimization via Steepest Descent in the 1-Norm. A. Ramesh, Aaron Mishkin, Mark Schmidt, Yihan Zhou, J. Lavington, Jennifer She. 03 Jul 2023.