Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
16 August 2016
Hamed Karimi
J. Nutini
Mark Schmidt
arXiv:1608.04636 (v4, latest)
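For quick reference, the Polyak-Łojasiewicz (PL) condition in the title is usually stated as follows: for an L-smooth function f with minimum value f*, there is a constant μ > 0 such that

  (1/2) ‖∇f(x)‖² ≥ μ (f(x) − f*)  for all x.

Under this condition, gradient descent with step size 1/L satisfies the linear rate

  f(x_k) − f* ≤ (1 − μ/L)^k (f(x_0) − f*),

which is the sense of "linear convergence" in the title; many of the citing papers below invoke the PL condition, or generalizations of it, in stochastic, nonconvex, and min-max settings.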
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition" (50 of 602 papers shown):
Communication-Censored Distributed Stochastic Gradient Descent
  Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling (09 Sep 2019)
Stochastic AUC Maximization with Deep Neural Networks
  Mingrui Liu, Zhuoning Yuan, Yiming Ying, Tianbao Yang (28 Aug 2019)
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
  Sepideh Hassan-Moghaddam, Mihailo R. Jovanović (23 Aug 2019)
Towards Better Generalization: BP-SVRG in Training Deep Neural Networks
  Hao Jin, Dachao Lin, Zhihua Zhang [ODL] (18 Aug 2019)
Path Length Bounds for Gradient Descent and Flow
  Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas (02 Aug 2019)
On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift
  Alekh Agarwal, Sham Kakade, Jason D. Lee, G. Mahajan (01 Aug 2019)
Sparse Optimization on Measures with Over-parameterized Gradient Descent
  Lénaïc Chizat (24 Jul 2019)
signADAM: Learning Confidences for Deep Neural Networks
  Dong Wang, Yicheng Liu, Wenwo Tang, Fanhua Shang, Hongying Liu, Qigong Sun, Licheng Jiao [ODL, FedML] (21 Jul 2019)
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
  Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen (08 Jul 2019)
The Role of Memory in Stochastic Optimization
  Antonio Orvieto, Jonas Köhler, Aurelien Lucchi (02 Jul 2019)
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
  Oliver Hinder, Aaron Sidford, N. Sohoni (27 Jun 2019)
A Stochastic Composite Gradient Method with Incremental Variance Reduction
  Junyu Zhang, Lin Xiao (24 Jun 2019)
Tensor Canonical Correlation Analysis with Convergence and Statistical Guarantees
  You-Lin Chen, Mladen Kolar, R. Tsay (12 Jun 2019)
Adversarial Attack Generation Empowered by Min-Max Optimization
  Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song [AAML] (09 Jun 2019)
Last-iterate convergence rates for min-max optimization
  Jacob D. Abernethy, Kevin A. Lai, Andre Wibisono (05 Jun 2019)
Global Optimality Guarantees For Policy Gradient Methods
  Jalaj Bhandari, Daniel Russo (05 Jun 2019)
Sparse optimal control of networks with multiplicative noise via policy gradient
  Benjamin J. Gravell, Yi Guo, Tyler H. Summers (28 May 2019)
Learning robust control for LQR systems with multiplicative noise via policy gradient
  Benjamin J. Gravell, Peyman Mohajerin Esfahani, Tyler H. Summers (28 May 2019)
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
  Yifan Hu, Xin Chen, Niao He (28 May 2019)
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
  Filip Hanzely, Peter Richtárik (27 May 2019)
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
  Sharan Vaswani, Aaron Mishkin, I. Laradji, Mark Schmidt, Gauthier Gidel, Simon Lacoste-Julien [ODL] (24 May 2019)
Game Theoretic Optimization via Gradient-based Nikaido-Isoda Function
  A. Raghunathan, A. Cherian, Devesh K. Jha (15 May 2019)
On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization
  Hao Yu, Rong Jin (10 May 2019)
On Structured Filtering-Clustering: Global Error Bound and Optimal First-Order Algorithms
  Nhat Ho, Tianyi Lin, Michael I. Jordan (16 Apr 2019)
The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
  Karthik A. Sankararaman, Soham De, Zheng Xu, Wenjie Huang, Tom Goldstein [ODL] (15 Apr 2019)
Controlling Neural Networks via Energy Dissipation
  Michael Möller, Thomas Möllenhoff, Zorah Lähner (05 Apr 2019)
Convergence rates for the stochastic gradient descent method for non-convex objective functions
  Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen (02 Apr 2019)
Provable Guarantees for Gradient-Based Meta-Learning
  M. Khodak, Maria-Florina Balcan, Ameet Talwalkar [FedML] (27 Feb 2019)
A Dictionary-Based Generalization of Robust PCA Part II: Applications to Hyperspectral Demixing
  Sirisha Rambhatla, Xingguo Li, Jineng Ren, Jarvis Haupt (26 Feb 2019)
Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
  Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, Meisam Razaviyayn (21 Feb 2019)
ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
  Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh (15 Feb 2019)
An adaptive stochastic optimization algorithm for resource allocation
  Xavier Fontaine, Shie Mannor, Vianney Perchet (12 Feb 2019)
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions
  Adrien B. Taylor, Francis R. Bach (03 Feb 2019)
Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions
  Yunwen Lei, Ting Hu, Guiying Li, Shengcai Liu [MLT] (03 Feb 2019)
ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding
  Hongyi Wang, Zachary B. Charles, Dimitris Papailiopoulos (28 Jan 2019)
SGD: General Analysis and Improved Rates
  Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik (27 Jan 2019)
Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
  Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona (25 Jan 2019)
Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak, Mahdi Soltanolkotabi [ODL] (25 Dec 2018)
Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
  Dhruv Malik, A. Pananjady, Kush S. Bhatia, K. Khamaru, Peter L. Bartlett, Martin J. Wainwright (20 Dec 2018)
Stagewise Training Accelerates Convergence of Testing Error Over SGD
  Zhuoning Yuan, Yan Yan, Rong Jin, Tianbao Yang (10 Dec 2018)
Solving Non-Convex Non-Concave Min-Max Games Under Polyak-Łojasiewicz Condition
  Maziar Sanjabi, Meisam Razaviyayn, Jason D. Lee (07 Dec 2018)
Inexact SARAH Algorithm for Stochastic Optimization
  Lam M. Nguyen, K. Scheinberg, Martin Takáč (25 Nov 2018)
On exponential convergence of SGD in non-convex over-parametrized learning
  Xinhai Liu, M. Belkin, Yu-Shen Liu (06 Nov 2018)
Uniform Convergence of Gradients for Non-Convex Learning and Optimization
  Dylan J. Foster, Ayush Sekhari, Karthik Sridharan (25 Oct 2018)
SpiderBoost and Momentum: Faster Stochastic Variance Reduction Algorithms
  Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh [ODL] (25 Oct 2018)
Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
  Sharan Vaswani, Francis R. Bach, Mark Schmidt (16 Oct 2018)
Efficient Greedy Coordinate Descent for Composite Problems
  Sai Praneeth Karimireddy, Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi (16 Oct 2018)
Continuous-time Models for Stochastic Optimization Algorithms
  Antonio Orvieto, Aurelien Lucchi (05 Oct 2018)
Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver
  Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney (30 Sep 2018)
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
  Ohad Shamir (23 Sep 2018)