Tight Complexity Bounds for Optimizing Composite Objectives
Blake E. Woodworth, Nathan Srebro
arXiv:1605.08003, 25 May 2016

Papers citing "Tight Complexity Bounds for Optimizing Composite Objectives" (34 / 34 papers shown)

Memory-Query Tradeoffs for Randomized Convex Optimization
X. Chen, Binghui Peng (21 Jun 2023)

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)

Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments
Quoc Tran-Dinh (30 Mar 2023)

Bayesian Optimization for Function Compositions with Applications to Dynamic Pricing
Kunal Jain, K. J. Prabuchandran, Tejas Bodas (21 Mar 2023)

Stochastic Steffensen method
Minda Zhao, Zehua Lai, Lek-Heng Lim (28 Nov 2022) [ODL]

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford (17 Jun 2022)

Efficient Convex Optimization Requires Superlinear Memory
A. Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant (29 Mar 2022)

Distributionally Robust Optimization via Ball Oracle Acceleration
Y. Carmon, Danielle Hausler (24 Mar 2022)

Stochastic Primal-Dual Deep Unrolling
Junqi Tang, Subhadip Mukherjee, Carola-Bibiane Schönlieb (19 Oct 2021)

Stochastic Bias-Reduced Gradient Methods
Hilal Asi, Y. Carmon, A. Jambulapati, Yujia Jin, Aaron Sidford (17 Jun 2021)

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He (29 Mar 2021)

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li (21 Mar 2021)

Machine Unlearning via Algorithmic Stability
Enayat Ullah, Tung Mai, Anup B. Rao, Ryan Rossi, R. Arora (25 Feb 2021)

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques
Filip Hanzely, Boxin Zhao, Mladen Kolar (19 Feb 2021) [FedML]

Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik (05 Oct 2020) [FedML]

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely (26 Aug 2020)

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (25 Aug 2020) [ODL]

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Chaobing Song, Yong Jiang, Yi-An Ma (18 Jun 2020)

Minibatch vs Local SGD for Heterogeneous Distributed Learning
Blake E. Woodworth, Kumar Kshitij Patel, Nathan Srebro (08 Jun 2020) [FedML]

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, D. Kovalev, Peter Richtárik (11 Feb 2020)

The Practicality of Stochastic Optimization in Imaging Inverse Problems
Junqi Tang, K. Egiazarian, Mohammad Golbabaee, Mike Davies (22 Oct 2019)

Semi-Cyclic Stochastic Gradient Descent
Hubert Eichner, Tomer Koren, H. B. McMahan, Nathan Srebro, Kunal Talwar (23 Apr 2019)

Lower Bounds for Parallel and Randomized Convex Optimization
Jelena Diakonikolas, Cristóbal Guzmán (05 Nov 2018)

Parallelization does not Accelerate Convex Optimization: Adaptivity Lower Bounds for Non-smooth Convex Minimization
Eric Balkanski, Yaron Singer (12 Aug 2018)

SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang (04 Jul 2018)

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu (20 Jun 2018)

Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
Max Simchowitz, A. Alaoui, Benjamin Recht (04 Apr 2018)

Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates
Arnulf Jentzen, Philippe von Wurstemberger (22 Mar 2018)

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
Tomoya Murata, Taiji Suzuki (01 Mar 2017) [OffRL]

Federated Optimization: Distributed Machine Learning for On-Device Intelligence
Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik (08 Oct 2016) [FedML]

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Michael I. Jordan (12 Sep 2016)

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu (18 Mar 2016) [ODL]

An optimal randomized incremental gradient method
Guanghui Lan, Yi Zhou (08 Jul 2015)

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Yuchen Zhang, Xiao Lin (10 Sep 2014)