A Lower Bound for the Optimization of Finite Sums
Alekh Agarwal, Léon Bottou
2 October 2014 · arXiv:1410.0723

Papers citing "A Lower Bound for the Optimization of Finite Sums"

32 of 32 citing papers shown
Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan (12 May 2023)

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He (29 Mar 2021)

Lower Bounds and Optimal Algorithms for Personalized Federated Learning
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik (05 Oct 2020) [FedML]

Effective Proximal Methods for Non-convex Non-smooth Regularized Learning
Guannan Liang, Qianqian Tong, Jiahao Ding, Miao Pan, J. Bi (14 Sep 2020)

A Survey on Large-scale Machine Learning
Meng Wang, Weijie Fu, Xiangnan He, Shijie Hao, Xindong Wu (10 Aug 2020)

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient
Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen (25 Jul 2020)

Optimal Complexity in Decentralized Training
Yucheng Lu, Christopher De Sa (15 Jun 2020)

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan (13 Feb 2020)

General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme
Tao Sun, Yuejiao Sun, Dongsheng Li, Qing Liao (11 Oct 2019)

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen (08 Jul 2019)

On the Adaptivity of Stochastic Gradient-Based Optimization
Lihua Lei, Michael I. Jordan (09 Apr 2019) [ODL]

R-SPIDER: A Fast Riemannian Stochastic Optimization Algorithm with Curvature Independent Rate
J.N. Zhang, Hongyi Zhang, S. Sra (10 Nov 2018)

Stochastic Nested Variance Reduction for Nonconvex Optimization
Dongruo Zhou, Pan Xu, Quanquan Gu (20 Jun 2018)

Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law
Max Simchowitz, A. Alaoui, Benjamin Recht (04 Apr 2018)

Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates
Arnulf Jentzen, Philippe von Wurstemberger (22 Mar 2018)

A Stochastic Trust Region Algorithm Based on Careful Step Normalization
Frank E. Curtis, K. Scheinberg, R. Shi (29 Dec 2017)

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization
Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč (20 May 2017)

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method
Lihua Lei, Michael I. Jordan (12 Sep 2016)

Stochastic Frank-Wolfe Methods for Nonconvex Optimization
Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola (27 Jul 2016)

Dimension-Free Iteration Complexity of Finite Sum Optimization Problems
Yossi Arjevani, Ohad Shamir (30 Jun 2016)

Tight Complexity Bounds for Optimizing Composite Objectives
Blake E. Woodworth, Nathan Srebro (25 May 2016)

Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds
Hongyi Zhang, Sashank J. Reddi, S. Sra (23 May 2016)

Fast Stochastic Methods for Nonsmooth Nonconvex Optimization
Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola (23 May 2016)

Fast Incremental Method for Nonconvex Optimization
Sashank J. Reddi, S. Sra, Barnabás Póczós, Alex Smola (19 Mar 2016)

Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity
Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang (27 Jul 2015) [FedML]

An optimal randomized incremental gradient method
Guanghui Lan, Yi Zhou (08 Jul 2015)

On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola (23 Jun 2015)

Stochastic Dual Coordinate Ascent with Adaptive Probabilities
Dominik Csiba, Zheng Qu, Peter Richtárik (27 Feb 2015) [ODL]

Randomized Dual Coordinate Ascent with Arbitrary Sampling
Zheng Qu, Peter Richtárik, Tong Zhang (21 Nov 2014)

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Yuchen Zhang, Xiao Lin (10 Sep 2014)

Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt, Nicolas Le Roux, Francis R. Bach (10 Sep 2013)