ResearchTrend.AI
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
arXiv:1608.04636, 16 August 2016 (latest version: v4)
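For context, the Polyak-Łojasiewicz (PL) condition named in the title is a standard inequality; stated here (a sketch of the usual form, not quoted from the paper) for a differentiable function f with minimum value f*:

```latex
% PL inequality: for some \mu > 0 and all x,
% the gradient norm dominates the suboptimality gap.
\frac{1}{2}\,\lVert \nabla f(x) \rVert^{2} \;\ge\; \mu \,\bigl( f(x) - f^{*} \bigr)
```

Functions satisfying this inequality need not be convex, yet gradient descent still converges linearly on them, which is the phenomenon the listed paper analyzes.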

Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition"

Showing 50 of 588 citing papers.
Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning
Zhenzhang Ye, Thomas Möllenhoff, Tao Wu, Daniel Cremers (27 Feb 2020)

PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models
Yinjun Wu, V. Tannen, S. Davidson (26 Feb 2020)

Stagewise Enlargement of Batch Size for SGD-based Learning
Shen-Yi Zhao, Yin-Peng Xie, Wu-Jun Li (26 Feb 2020)

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization
Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh (26 Feb 2020)

Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien (24 Feb 2020)

Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems
Junchi Yang, Negar Kiyavash, Niao He (22 Feb 2020)

Stochastic Subspace Cubic Newton Method
Filip Hanzely, N. Doikov, Peter Richtárik, Y. Nesterov (21 Feb 2020)

Data Heterogeneity Differential Privacy: From Theory to Algorithm
Yilin Kang, Jian Li, Yong Liu, Weiping Wang (20 Feb 2020)

Input Perturbation: A New Paradigm between Central and Local Differential Privacy
Yilin Kang, Yong Liu, Ben Niu, Xin-Yi Tong, Likun Zhang, Weiping Wang (20 Feb 2020)

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk (19 Feb 2020)

The Geometry of Sign Gradient Descent
Lukas Balles, Fabian Pedregosa, Nicolas Le Roux (19 Feb 2020)

A Second look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance
Xiaoyun Li, Zhenxun Zhuang, Francesco Orabona (12 Feb 2020)

Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik (09 Feb 2020)

Almost Sure Convergence of Dropout Algorithms for Neural Networks
Albert Senen-Cerda, J. Sanders (06 Feb 2020)

Complexity Guarantees for Polyak Steps with Momentum
Mathieu Barré, Adrien B. Taylor, Alexandre d’Aspremont (03 Feb 2020)

Resolving learning rates adaptively by locating Stochastic Non-Negative Associated Gradient Projection Points using line searches
D. Kafka, D. Wilke (15 Jan 2020)

Choosing the Sample with Lowest Loss makes SGD Robust
Vatsal Shah, Xiaoxia Wu, Sujay Sanghavi (10 Jan 2020)

Gradient descent algorithms for Bures-Wasserstein barycenters
Sinho Chewi, Tyler Maunu, Philippe Rigollet, Austin J. Stromme (06 Jan 2020)

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem
Hesameddin Mohammadi, A. Zare, Mahdi Soltanolkotabi, M. Jovanović (26 Dec 2019)

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao (10 Dec 2019)

On the rate of convergence of a neural network regression estimate learned by gradient descent
Alina Braun, Michael Kohler, Harro Walk (09 Dec 2019)

Fast Stochastic Ordinal Embedding with Variance Reduction and Adaptive Step Size
Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan Yao (01 Dec 2019)

Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Non Convex Optimization
Anas Barakat, Pascal Bianchi (18 Nov 2019)

Revisiting the Approximate Carathéodory Problem via the Frank-Wolfe Algorithm
Cyrille W. Combettes, Sebastian Pokutta (11 Nov 2019)

On the Convergence of Local Descent Methods in Federated Learning
Farzin Haddadpour, M. Mahdavi (31 Oct 2019)

Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization
Farzin Haddadpour, Mohammad Mahdi Kamani, M. Mahdavi, V. Cadambe (30 Oct 2019)

Weighted Distributed Differential Privacy ERM: Convex and Non-convex
Yilin Kang, Yong Liu, Weiping Wang (23 Oct 2019)

First-Order Preconditioning via Hypergradient Descent
Theodore H. Moskovitz, Rui Wang, Janice Lan, Sanyam Kapoor, Thomas Miconi, J. Yosinski, Aditya Rawal (18 Oct 2019)

Improving the convergence of SGD through adaptive batch sizes
Scott Sievert, Zachary B. Charles (18 Oct 2019)

Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation
S. Meng, Sharan Vaswani, I. Laradji, Mark Schmidt, Simon Lacoste-Julien (11 Oct 2019)

Nearly Minimal Over-Parametrization of Shallow Neural Networks
Armin Eftekhari, Chaehwan Song, Volkan Cevher (09 Oct 2019)

Linear-Quadratic Mean-Field Reinforcement Learning: Convergence of Policy Gradient Methods
René Carmona, Mathieu Laurière, Zongjun Tan (09 Oct 2019)

Distributed Learning of Deep Neural Networks using Independent Subnet Training
John Shelton Hyatt, Cameron R. Wolfe, Michael Lee, Yuxin Tang, Anastasios Kyrillidis, Christopher M. Jermaine (04 Oct 2019)

Stochastic gradient descent for hybrid quantum-classical optimization
R. Sweke, Frederik Wilde, Johannes Jakob Meyer, Maria Schuld, Paul K. Fährmann, Barthélémy Meynard-Piganeau, Jens Eisert (02 Oct 2019)

Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip
Nicolas Loizou (26 Sep 2019)

Differentially Private Meta-Learning
Jeffrey Li, M. Khodak, S. Caldas, Ameet Talwalkar (12 Sep 2019)

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich, Sai Praneeth Karimireddy (11 Sep 2019)

Communication-Censored Distributed Stochastic Gradient Descent
Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling (09 Sep 2019)

Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
Sepideh Hassan-Moghaddam, Mihailo R. Jovanović (23 Aug 2019)

Towards Better Generalization: BP-SVRG in Training Deep Neural Networks
Hao Jin, Dachao Lin, Zhihua Zhang (18 Aug 2019)

Path Length Bounds for Gradient Descent and Flow
Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas (02 Aug 2019)

On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift
Alekh Agarwal, Sham Kakade, Jason D. Lee, G. Mahajan (01 Aug 2019)

Sparse Optimization on Measures with Over-parameterized Gradient Descent
Lénaïc Chizat (24 Jul 2019)

signADAM: Learning Confidences for Deep Neural Networks
Dong Wang, Yicheng Liu, Wenwo Tang, Fanhua Shang, Hongying Liu, Qigong Sun, Licheng Jiao (21 Jul 2019)

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen (08 Jul 2019)

The Role of Memory in Stochastic Optimization
Antonio Orvieto, Jonas Köhler, Aurelien Lucchi (02 Jul 2019)

Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
Oliver Hinder, Aaron Sidford, N. Sohoni (27 Jun 2019)

A Stochastic Composite Gradient Method with Incremental Variance Reduction
Junyu Zhang, Lin Xiao (24 Jun 2019)

Tensor Canonical Correlation Analysis with Convergence and Statistical Guarantees
You-Lin Chen, Mladen Kolar, R. Tsay (12 Jun 2019)

Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song (09 Jun 2019)