Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition (arXiv:1608.04636)
Hamed Karimi, J. Nutini, Mark Schmidt
16 August 2016
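For context, the Polyak-Łojasiewicz (PL) condition named in the title is the standard inequality (stated here for reference in its usual textbook form; it is not text taken from the listing below): a differentiable function f with minimum value f^* satisfies the PL condition with constant \mu > 0 if

    \frac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \qquad \text{for all } x.

Under this condition, together with L-smoothness, gradient descent with step size 1/L converges linearly to the optimal function value, which is the result the paper establishes and extends to proximal-gradient methods.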
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition" (showing 50 of 588):
- A Theoretical Analysis of the Learning Dynamics under Class Imbalance. Emanuele Francazi, Marco Baity-Jesi, Aurelien Lucchi. 01 Jul 2022.
- On the sample complexity of entropic optimal transport. Philippe Rigollet, Austin J. Stromme. 27 Jun 2022. [OT]
- Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out. Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu. 22 Jun 2022.
- Frank-Wolfe-based Algorithms for Approximating Tyler's M-estimator. L. Danon, Dan Garber. 19 Jun 2022.
- Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora. 14 Jun 2022. [FAtt]
- A Stochastic Proximal Method for Nonsmooth Regularized Finite Sum Optimization. Dounia Lakhmiri, D. Orban, Andrea Lodi. 14 Jun 2022.
- Towards Understanding Sharpness-Aware Minimization. Maksym Andriushchenko, Nicolas Flammarion. 13 Jun 2022. [AAML]
- Anchor Sampling for Federated Learning with Partial Client Participation. Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao. 13 Jun 2022. [FedML]
- On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms. Lam M. Nguyen, Trang H. Tran. 13 Jun 2022.
- Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression. Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-hsiu Hsieh. 08 Jun 2022.
- Rotation to Sparse Loadings using $L^p$ Losses and Related Inference Problems. Xinyi Liu, Gabriel Wallin, Yunxiao Chen, I. Moustaki. 05 Jun 2022.
- Federated Learning with a Sampling Algorithm under Isoperimetry. Lukang Sun, Adil Salim, Peter Richtárik. 02 Jun 2022. [FedML]
- Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel. 01 Jun 2022. [AAML]
- Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games. Sihan Zeng, Thinh T. Doan, Justin Romberg. 27 May 2022.
- Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD for Gradient-Dominated Function. Saeed Masiha, Saber Salehkaleybar, Niao He, Negar Kiyavash, Patrick Thiran. 25 May 2022.
- Learning from time-dependent streaming data with online stochastic algorithms. Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger. 25 May 2022.
- Differentially private Riemannian optimization. Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao. 19 May 2022.
- Policy Gradient Method For Robust Reinforcement Learning. Yue Wang, Shaofeng Zou. 15 May 2022.
- A globally convergent fast iterative shrinkage-thresholding algorithm with a new momentum factor for single and multi-objective convex optimization. H. Tanabe, E. H. Fukuda, N. Yamashita. 11 May 2022.
- EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. Laurent Condat, Kai Yi, Peter Richtárik. 09 May 2022.
- Network Gradient Descent Algorithm for Decentralized Federated Learning. Shuyuan Wu, Danyang Huang, Hansheng Wang. 06 May 2022. [FedML]
- Implicit Regularization Properties of Variance Reduced Stochastic Mirror Descent. Yiling Luo, X. Huo, Y. Mei. 29 Apr 2022.
- Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD. Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias. 26 Apr 2022.
- Riemannian Hamiltonian methods for min-max optimization on manifolds. Andi Han, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar, Junbin Gao. 25 Apr 2022.
- Sharper Utility Bounds for Differentially Private Models. Yilin Kang, Yong Liu, Jian Li, Weiping Wang. 22 Apr 2022. [FedML]
- A Fast and Convergent Proximal Algorithm for Regularized Nonconvex and Nonsmooth Bi-level Optimization. Ziyi Chen, B. Kailkhura, Yi Zhou. 30 Mar 2022.
- Convergence of gradient descent for deep neural networks. S. Chatterjee. 30 Mar 2022. [ODL]
- A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima. Tae-Eon Ko, Xiantao Li. 21 Mar 2022.
- Learning Distributionally Robust Models at Scale via Composite Optimization. Farzin Haddadpour, Mohammad Mahdi Kamani, M. Mahdavi, Amin Karbasi. 17 Mar 2022. [OOD]
- Private Non-Convex Federated Learning Without a Trusted Server. Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn. 13 Mar 2022. [FedML]
- Federated Minimax Optimization: Improved Convergence Analyses and Algorithms. Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney. 09 Mar 2022. [FedML]
- Noisy Low-rank Matrix Optimization: Geometry of Local Minima and Convergence Rate. Ziye Ma, Somayeh Sojoudi. 08 Mar 2022.
- Whiplash Gradient Descent Dynamics. Subhransu S. Bhattacharjee, I. Petersen. 04 Mar 2022.
- Provably Efficient Convergence of Primal-Dual Actor-Critic with Nonlinear Function Approximation. Jing Dong, Li Shen, Ying Xu, Baoxiang Wang. 28 Feb 2022.
- Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation (Extended Abstract). Aleksandra Burashnikova, Yury Maximov, Marianne Clausel, Charlotte Laclau, F. Iutzeler, Massih-Reza Amini. 26 Feb 2022.
- From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality. Fusheng Liu, Haizhao Yang, Soufiane Hayou, Qianxiao Li. 22 Feb 2022. [AI4CE]
- Tackling benign nonconvexity with smoothing and stochastic gradients. Harsh Vardhan, Sebastian U. Stich. 18 Feb 2022.
- Delay-adaptive step-sizes for asynchronous learning. Xuyang Wu, Sindri Magnússon, Hamid Reza Feyzmahdavian, M. Johansson. 17 Feb 2022.
- Optimal Algorithms for Stochastic Multi-Level Compositional Optimization. Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang. 15 Feb 2022.
- Improved analysis for a proximal algorithm for sampling. Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono. 13 Feb 2022.
- Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo. Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Shunshi Zhang. 10 Feb 2022.
- Local Linear Convergence of Gradient Methods for Subspace Optimization via Strict Complementarity. Dan Garber, Ron Fisher. 08 Feb 2022.
- Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution. Lam M. Nguyen, Trang H. Tran, Marten van Dijk. 07 Feb 2022.
- PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation. Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler H. Summers, John Lygeros. 01 Feb 2022.
- DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity. Chung-Yiu Yau, Hoi-To Wai. 01 Feb 2022.
- Differentially Private SGDA for Minimax Problems. Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying. 22 Jan 2022.
- On generalization bounds for deep networks based on loss surface implicit regularization. Masaaki Imaizumi, Johannes Schmidt-Hieber. 12 Jan 2022. [ODL]
- Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size. Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler. 30 Dec 2021. [ODL]
- DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning. Ismat Jarin, Birhanu Eshete. 24 Dec 2021. [AAML]
- Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems. Thinh T. Doan. 17 Dec 2021.