Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 August 2016
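For context, the condition named in the title is usually stated as follows; this is a standard formulation rather than a quotation from this page, with $f$ assumed $L$-smooth, $f^\star$ its minimum value, and $\mu > 0$ the PL constant:

$$\tfrac{1}{2}\,\bigl\lVert \nabla f(x) \bigr\rVert^{2} \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr) \qquad \text{for all } x.$$

Under this inequality, gradient descent with step size $1/L$ satisfies the linear rate

$$f(x_k) - f^\star \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,\bigl(f(x_0) - f^\star\bigr),$$

which is the sense in which the title claims linear convergence without strong convexity.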
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition" (38 of 588 papers shown)
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
Di Wang, Minwei Ye, Jinhui Xu · 14 Feb 2018

Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity
Dan Garber · 13 Feb 2018

Fast Global Convergence via Landscape of Empirical Loss
Chao Qu, Yan Li, Huan Xu · 13 Feb 2018

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
Zhize Li, Jian Li · 13 Feb 2018

signSGD: Compressed Optimisation for Non-Convex Problems
Jeremy Bernstein, Yu Wang, Kamyar Azizzadenesheli, Anima Anandkumar · FedML, ODL · 13 Feb 2018

On the Proximal Gradient Algorithm with Alternated Inertia
F. Iutzeler, J. Malick · 17 Jan 2018

Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator
Maryam Fazel, Rong Ge, Sham Kakade, M. Mesbahi · 15 Jan 2018

A Stochastic Trust Region Algorithm Based on Careful Step Normalization
Frank E. Curtis, K. Scheinberg, R. Shi · 29 Dec 2017

Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers
Yifan Chen, Yuejiao Sun, W. Yin · 22 Nov 2017

Riemannian Optimization via Frank-Wolfe Methods
Melanie Weber, S. Sra · 30 Oct 2017

Stability and Generalization of Learning Algorithms that Converge to Global Optima
Zachary B. Charles, Dimitris Papailiopoulos · MLT · 23 Oct 2017

Characterization of Gradient Dominance and Regularity Conditions for Neural Networks
Yi Zhou, Yingbin Liang · 18 Oct 2017

A Modular Analysis of Adaptive (Non-)Convex Optimization: Optimism, Composite Objectives, and Variational Bounds
Pooria Joulani, András Gyorgy, Csaba Szepesvári · 08 Sep 2017

Nonconvex Sparse Logistic Regression with Weakly Convex Regularization
Xinyue Shen, Yuantao Gu · 07 Aug 2017

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Bin Hu, Peter M. Seiler, Anders Rantzer · 25 Jun 2017

Gradient Diversity: a Key Ingredient for Scalable Distributed Learning
Dong Yin, A. Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, Peter L. Bartlett · 18 Jun 2017

YellowFin and the Art of Momentum Tuning
Jian Zhang, Ioannis Mitliagkas · ODL · 12 Jun 2017

Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
Lukas Balles, Philipp Hennig · 22 May 2017

Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
Qunwei Li, Yi Zhou, Yingbin Liang, P. Varshney · 14 May 2017

Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization
Feihu Huang, Songcan Chen · 26 Apr 2017

Faster Subgradient Methods for Functions with Hölderian Growth
Patrick R. Johnstone, P. Moulin · 01 Apr 2017

Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry
Guillaume Garrigos, Lorenzo Rosasco, S. Villa · 28 Mar 2017

Online Learning Rate Adaptation with Hypergradient Descent
A. G. Baydin, R. Cornish, David Martínez-Rubio, Mark Schmidt, Frank Wood · ODL · 14 Mar 2017

Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation
Tianyi Chen, Qing Ling, G. Giannakis · 05 Mar 2017

How to Escape Saddle Points Efficiently
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham Kakade, Michael I. Jordan · ODL · 02 Mar 2017

SAGA and Restricted Strong Convexity
Chao Qu, Yan Li, Huan Xu · 19 Feb 2017

Linear convergence of SDCA in statistical estimation
Chao Qu, Huan Xu · 26 Jan 2017

Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization
Xingguo Li, Junwei Lu, R. Arora, Jarvis Haupt, Han Liu, Zhaoran Wang, T. Zhao · 29 Dec 2016

Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption
Jie Liu, Martin Takáč · ODL · 16 Dec 2016

The Physical Systems Behind Optimization Algorithms
Lin F. Yang, R. Arora, Vladimir Braverman, T. Zhao · AI4CE · 08 Dec 2016

Adaptive Accelerated Gradient Converging Methods under Holderian Error Bound Condition
Mingrui Liu, Tianbao Yang · 23 Nov 2016

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma · OOD · 14 Nov 2016

CoCoA: A General Framework for Communication-Efficient Distributed Optimization
Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, Martin Jaggi · 07 Nov 2016

Linear Convergence of SVRG in Statistical Estimation
Chao Qu, Yan Li, Huan Xu · 07 Nov 2016

Big Batch SGD: Automated Inference using Adaptive Batch Sizes
Soham De, A. Yadav, David Jacobs, Tom Goldstein · ODL · 18 Oct 2016

Accelerating Stochastic Composition Optimization
Mengdi Wang, Ji Liu, Ethan X. Fang · 25 Jul 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Yi Tian Xu, Qihang Lin, Tianbao Yang · 04 Jul 2016

RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Tianbao Yang, Qihang Lin · 09 Dec 2015