Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
arXiv:1608.04636, 16 August 2016
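For context on the result in the title, here is a minimal illustrative sketch (not code from the paper; the problem instance and all variable names are made up for the example): gradient descent on a rank-deficient least-squares objective, which satisfies the Polyak-Łojasiewicz (PL) inequality without being strongly convex, and the linear rate f(w_k) - f* <= (1 - mu/L)^k (f(w_0) - f*) that the paper establishes for such objectives.

# A minimal, illustrative sketch (not code from the paper): gradient descent on a
# rank-deficient least-squares objective f(w) = 0.5 * ||A w - b||^2. Such an f is
# not strongly convex, but it satisfies the PL inequality
#     0.5 * ||grad f(w)||^2 >= mu * (f(w) - f*),
# so gradient descent with step size 1/L converges linearly:
#     f(w_k) - f* <= (1 - mu/L)^k * (f(w_0) - f*).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 20))  # rank(A) <= 5 < 20
b = A @ rng.standard_normal(20)   # b lies in the range of A, so the optimal value f* is 0

def f(w):
    r = A @ w - b
    return 0.5 * (r @ r)

def grad_f(w):
    return A.T @ (A @ w - b)

evals = np.linalg.eigvalsh(A.T @ A)
L = evals.max()                     # smoothness constant of f
mu = evals[evals > 1e-8 * L].min()  # PL constant: smallest nonzero eigenvalue of A^T A

w = np.zeros(20)
f0 = f(w)
for k in range(1, 201):
    w = w - (1.0 / L) * grad_f(w)
    if k % 50 == 0:
        # observed optimality gap vs. the geometric bound (1 - mu/L)^k * (f(w_0) - f*)
        print(f"iter {k:3d}   f(w_k) = {f(w):.3e}   bound = {(1 - mu / L) ** k * f0:.3e}")

The printed objective values stay below the geometric bound, which is the behaviour the PL analysis predicts even though the problem has a whole subspace of minimizers.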
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition" (50 of 588 papers shown)
Stability and Generalization of Stochastic Gradient Methods for Minimax Problems
Yunwen Lei
Zhenhuan Yang
Tianbao Yang
Yiming Ying
08 May 2021
Towards Sharper Utility Bounds for Differentially Private Pairwise Learning
Yilin Kang
Yong Liu
Jian Li
Weiping Wang
FedML
07 May 2021
Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis
Stephan Wojtowytsch
04 May 2021
Convergence Analysis and System Design for Federated Learning over Wireless Networks
Shuo Wan
Jiaxun Lu
Pingyi Fan
Yunfeng Shao
Chenghui Peng
Khaled B. Letaief
30 Apr 2021
Decentralized Federated Averaging
Tao Sun
Dongsheng Li
Bao Wang
FedML
23 Apr 2021
A Theoretical Analysis of Learning with Noisily Labeled Data
Yi Tian Xu
Qi Qian
Hao Li
Rong Jin
NoLa
08 Apr 2021
Why Do Local Methods Solve Nonconvex Problems?
Tengyu Ma
24 Mar 2021
Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n)
Yegor Klochkov
Nikita Zhivotovskiy
22 Mar 2021
Algorithmic Challenges in Ensuring Fairness at the Time of Decision
Jad Salem
Swati Gupta
Vijay Kamble
FaML
16 Mar 2021
Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
Yuyang Deng
M. Mahdavi
25 Feb 2021
Distributionally Robust Federated Averaging
Yuyang Deng
Mohammad Mahdi Kamani
M. Mahdavi
FedML
25 Feb 2021
Provable Super-Convergence with a Large Cyclical Learning Rate
Samet Oymak
22 Feb 2021
Convergence of stochastic gradient descent schemes for Lojasiewicz-landscapes
Steffen Dereich
Sebastian Kassing
16 Feb 2021
Fast and accurate optimization on the orthogonal manifold without retraction
Pierre Ablin
Gabriel Peyré
15 Feb 2021
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
Kenji Kawaguchi
PINN
15 Feb 2021
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra
Rayana H. Jaafar
George J. Pappas
Hamed Hassani
FedML
14 Feb 2021
Stochastic Gradient Langevin Dynamics with Variance Reduction
Zhishen Huang
Stephen Becker
12 Feb 2021
Proximal and Federated Random Reshuffling
Konstantin Mishchenko
Ahmed Khaled
Peter Richtárik
FedML
12 Feb 2021
Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry
Ziyi Chen
Yi Zhou
Tengyu Xu
Yingbin Liang
09 Feb 2021
A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
Mo Zhou
Rong Ge
Chi Jin
04 Feb 2021
Stability and Generalization of the Decentralized Stochastic Gradient Descent
Tao Sun
Dongsheng Li
Bao Wang
02 Feb 2021
On the Local Linear Rate of Consensus on the Stiefel Manifold
Shixiang Chen
Alfredo García
Mingyi Hong
Shahin Shahrampour
22 Jan 2021
Can stable and accurate neural networks be computed? -- On the barriers of deep learning and Smale's 18th problem
Matthew J. Colbrook
Vegard Antun
A. Hansen
20 Jan 2021
Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent
Junyuan Hong
Zhangyang Wang
Jiayu Zhou
19 Jan 2021
Learning with Gradient Descent and Weakly Convex Losses
Dominic Richards
Michael G. Rabbat
MLT
13 Jan 2021
CADA: Communication-Adaptive Distributed Adam
Tianyi Chen
Ziye Guo
Yuejiao Sun
W. Yin
ODL
31 Dec 2020
Variance Reduction on General Adaptive Stochastic Mirror Descent
Wenjie Li
Zhanyu Wang
Yichen Zhang
Guang Cheng
26 Dec 2020
Noisy Linear Convergence of Stochastic Gradient Descent for CV@R Statistical Learning under Polyak-Łojasiewicz Conditions
Dionysios S. Kalogerias
14 Dec 2020
Learning over no-Preferred and Preferred Sequence of items for Robust Recommendation
Aleksandra Burashnikova
Marianne Clausel
Charlotte Laclau
Franck Iutzeler
Yury Maximov
Massih-Reza Amini
12 Dec 2020
Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova
Pavel Dvurechensky
Alexander Gasnikov
Eduard A. Gorbunov
Sergey Guminov
Dmitry Kamzolov
Innokentiy Shibaev
11 Dec 2020
A Study of Condition Numbers for First-Order Optimization
Charles Guille-Escuret
Baptiste Goujaud
M. Girotti
Ioannis Mitliagkas
10 Dec 2020
Characterization of Excess Risk for Locally Strongly Convex Population Risk
Mingyang Yi
Ruoyu Wang
Zhi-Ming Ma
04 Dec 2020
Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients
Jun Li
Yumeng Shao
Ming Ding
Chuan Ma
Kang Wei
Zhu Han
H. Vincent Poor
02 Dec 2020
Geom-SPIDER-EM: Faster Variance Reduced Stochastic Expectation Maximization for Nonconvex Finite-Sum Optimization
G. Fort
Eric Moulines
Hoi-To Wai
TPM
24 Nov 2020
On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
Abolfazl Hashemi
Anish Acharya
Rudrajit Das
H. Vikalo
Sujay Sanghavi
Inderjit Dhillon
20 Nov 2020
Convergence Analysis of Homotopy-SGD for non-convex optimization
Matilde Gargiani
Andrea Zanelli
Quoc Tran-Dinh
Moritz Diehl
Frank Hutter
20 Nov 2020
Towards Optimal Problem Dependent Generalization Error Bounds in Statistical Learning Theory
Yunbei Xu
A. Zeevi
12 Nov 2020
A fast randomized incremental gradient method for decentralized non-convex optimization
Ran Xin
U. Khan
S. Kar
07 Nov 2020
LQR with Tracking: A Zeroth-order Approach and Its Global Convergence
Tongzheng Ren
Aoxiao Zhong
Na Li
03 Nov 2020
Efficient constrained sampling via the mirror-Langevin algorithm
Kwangjun Ahn
Sinho Chewi
30 Oct 2020
Finite-Time Convergence Rates of Decentralized Stochastic Approximation with Applications in Multi-Agent and Multi-Task Learning
Sihan Zeng
Thinh T. Doan
Justin Romberg
28 Oct 2020
Optimal Client Sampling for Federated Learning
Jiajun He
Samuel Horváth
Peter Richtárik
FedML
26 Oct 2020
Decentralized Deep Learning using Momentum-Accelerated Consensus
Aditya Balu
Zhanhong Jiang
Sin Yong Tan
Chinmay Hegde
Young M. Lee
Soumik Sarkar
FedML
21 Oct 2020
Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer
Chen Zhu
Zheng Xu
Ali Shafahi
Manli Shu
Amin Ghiasi
Tom Goldstein
MQ
14 Oct 2020
AEGD: Adaptive Gradient Descent with Energy
Hailiang Liu
Xuping Tian
ODL
10 Oct 2020
WeMix: How to Better Utilize Data Augmentation
Yi Tian Xu
Asaf Noy
Ming Lin
Qi Qian
Hao Li
Rong Jin
03 Oct 2020
Variance-Reduced Methods for Machine Learning
Robert Mansel Gower
Mark Schmidt
Francis R. Bach
Peter Richtárik
02 Oct 2020
A variable metric mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
Tengteng Yu
Xinwei Liu
Yuhong Dai
Jie Sun
02 Oct 2020
Linear Convergence of Generalized Mirror Descent with Time-Dependent Mirrors
Adityanarayanan Radhakrishnan
M. Belkin
Caroline Uhler
18 Sep 2020
Finite-Sample Guarantees for Wasserstein Distributionally Robust Optimization: Breaking the Curse of Dimensionality
Rui Gao
09 Sep 2020