ResearchTrend.AI
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition

Hamed Karimi, J. Nutini, Mark Schmidt
arXiv:1608.04636, 16 August 2016
Papers citing "Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition"

50 / 588 papers shown

1. Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
   Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang (29 Jan 2023) [DD]
2. Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by Coordinate Descent
   Sara Venturini, Andrea Cristofari, Francesco Rinaldi, Francesco Tudisco (28 Jan 2023)
3. Theoretical Analysis of Offline Imitation With Supplementary Dataset
   Ziniu Li, Tian Xu, Y. Yu, Zhixun Luo (27 Jan 2023) [OffRL]
4. Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing
   Jikai Jin, Zhiyuan Li, Kaifeng Lyu, S. Du, Jason D. Lee (27 Jan 2023) [MLT]
5. On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality
   Lu Xia, M. Hochstenbach, Stefano Massei (23 Jan 2023)
6. Convergence beyond the over-parameterized regime using Rayleigh quotients
   David A. R. Robin, Kevin Scaman, Marc Lelarge (19 Jan 2023)
7. Learning Partial Differential Equations by Spectral Approximates of General Sobolev Spaces
   Juan Esteban Suarez Cardona, Phil-Alexander Hofmann, Michael Hecht (12 Jan 2023)
8. Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
   Xiao-Tong Yuan, P. Li (09 Jan 2023)
9. Restarts subject to approximate sharpness: A parameter-free and optimal scheme for first-order methods
   Ben Adcock, Matthew J. Colbrook, Maksym Neyra-Nesterenko (05 Jan 2023)
10. On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis
    Le‐Yu Chen, Jing Xu, J.N. Zhang (02 Jan 2023)
11. Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization
    G. Fort, Eric Moulines (02 Jan 2023)
12. Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave Minimax Optimization
    Taoli Zheng, Lingling Zhu, Anthony Man-Cho So, Jose H. Blanchet, Jiajin Li (26 Dec 2022)
13. Iterative regularization in classification via hinge loss diagonal descent
    Vassilis Apidopoulos, T. Poggio, Lorenzo Rosasco, S. Villa (24 Dec 2022)
14. Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
    Quoc Tran-Dinh, Marten van Dijk (19 Dec 2022)
15. Cyclic Block Coordinate Descent With Variance Reduction for Composite Nonconvex Optimization
    Xu Cai, Chaobing Song, Stephen J. Wright, Jelena Diakonikolas (09 Dec 2022)
16. Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
    Mayank Baranwal, Param Budhraja, V. Raj, A. Hota (07 Dec 2022)
17. Scalable Hierarchical Over-the-Air Federated Learning
    Seyed Mohammad Azimi-Abarghouyi, Viktoria Fodor (29 Nov 2022)
18. Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class of Nonconvex-Nonconcave Minimax Problems
    Zi Xu, Ziqi Wang, Junlin Wang, Y. Dai (24 Nov 2022)
19. Adaptive Federated Minimax Optimization with Lower Complexities
    Feihu Huang, Xinrui Wang, Junyi Li, Songcan Chen (14 Nov 2022) [FedML]
20. Regularized Rényi divergence minimization through Bregman proximal gradient algorithms
    Thomas Guilmeau, Émilie Chouzenoux, Victor Elvira (09 Nov 2022)
21. Neural PDE Solvers for Irregular Domains
    Biswajit Khara, Ethan Herron, Zhanhong Jiang, Aditya Balu, Chih-Hsuan Yang, ..., Anushrut Jignasu, Soumik Sarkar, Chinmay Hegde, A. Krishnamurthy, Baskar Ganapathysubramanian (07 Nov 2022) [AI4CE]
22. Convergence Rates of Stochastic Zeroth-order Gradient Descent for Łojasiewicz Functions
    Tianyu Wang, Yasong Feng (31 Oct 2022)
23. Optimization for Amortized Inverse Problems
    Tianci Liu, Tong Yang, Quan Zhang, Qi Lei (25 Oct 2022)
24. Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
    Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu (24 Oct 2022)
25. Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
    Kun Yuan, Xinmeng Huang, Yiming Chen, Xiaohan Zhang, Yingya Zhang, Pan Pan (14 Oct 2022)
26. From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
    Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan (13 Oct 2022)
27. SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization
    Hanseul Cho, Chulhee Yun (12 Oct 2022)
28. Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies
    Bin Hu, Kai Zhang, Na Li, M. Mesbahi, Maryam Fazel, Tamer Başar (10 Oct 2022)
29. On skip connections and normalisation layers in deep optimisation
    L. MacDonald, Jack Valmadre, Hemanth Saratchandran, Simon Lucey (10 Oct 2022) [ODL]
30. Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces
    Amirali Aghazadeh, Nived Rajaraman, Tony Tu, Kannan Ramchandran (05 Oct 2022)
31. Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations
    Jialing Liao, Zheng Chen, Erik G. Larsson (05 Oct 2022)
32. SAGDA: Achieving $\mathcal{O}(\varepsilon^{-2})$ Communication Complexity in Federated Min-Max Learning
    Haibo Yang, Zhuqing Liu, Xin Zhang, Jia-Wei Liu (02 Oct 2022) [FedML]
33. Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition
    Jianhao Ma, Li-Zhen Guo, Salar Fattahi (01 Oct 2022)
34. Restricted Strong Convexity of Deep Learning Models with Smooth Activations
    A. Banerjee, Pedro Cisneros-Velarde, Libin Zhu, M. Belkin (29 Sep 2022)
35. Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
    Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang (27 Sep 2022)
36. Convergence rate of the (1+1)-evolution strategy on locally strongly convex functions with Lipschitz continuous gradient and their monotonic transformations
    Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto (26 Sep 2022)
37. BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
    Mao Ye, B. Liu, S. Wright, Peter Stone, Qian Liu (19 Sep 2022)
38. Efficiency Ordering of Stochastic Gradient Descent
    Jie Hu, Vishwaraj Doshi, Do Young Eun (15 Sep 2022)
39. Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses
    Andrew Lowy, Meisam Razaviyayn (15 Sep 2022)
40. Statistical Learning Theory for Control: A Finite Sample Perspective
    Anastasios Tsiamis, Ingvar M. Ziemann, Nikolai Matni, George J. Pappas (12 Sep 2022)
41. Optimizing the Performative Risk under Weak Convexity Assumptions
    Yulai Zhao (02 Sep 2022)
42. Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
    Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki (01 Sep 2022) [FedML]
43. Asynchronous Training Schemes in Distributed Learning with Time Delay
    Haoxiang Wang, Zhanhong Jiang, Chao Liu, Soumik Sarkar, D. Jiang, Young M. Lee (28 Aug 2022)
44. Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
    Zhize Li, Jian Li (22 Aug 2022)
45. Adaptive Learning Rates for Faster Stochastic Gradient Methods
    Samuel Horváth, Konstantin Mishchenko, Peter Richtárik (10 Aug 2022) [ODL]
46. Improved Policy Optimization for Online Imitation Learning
    J. Lavington, Sharan Vaswani, Mark Schmidt (29 Jul 2022) [OffRL]
47. Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis
    Tao Li, Haozhe Lei, Quanyan Zhu (29 Jul 2022) [AAML]
48. Fixed-Time Convergence for a Class of Nonconvex-Nonconcave Min-Max Problems
    Kunal Garg, Mayank Baranwal (26 Jul 2022)
49. Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
    Wei Jiang, Gang Li, Yibo Wang, Lijun Zhang, Tianbao Yang (18 Jul 2022)
50. Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis
    Taha Belkhouja, Yan Yan, J. Doppa (09 Jul 2022) [OODAI4TS]