Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
Aaron Defazio, T. Caetano, Justin Domke
arXiv:1407.2710 · 10 July 2014
Papers citing "Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems"
39 / 39 papers shown
A Coefficient Makes SVRG Effective · Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu · 09 Nov 2023
SPIRAL: A superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization · Pourya Behmandpoor, P. Latafat, Andreas Themelis, Marc Moonen, Panagiotis Patrinos · 17 Jul 2022
Federated Random Reshuffling with Compression and Variance Reduction · Grigory Malinovsky, Peter Richtárik · 08 May 2022
L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method · Bugra Can, Saeed Soori, M. Dehnavi, Mert Gurbuzbalaban · 20 Aug 2021
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters · Filip Hanzely · 26 Aug 2020
Federated Stochastic Gradient Langevin Dynamics · Khaoula El Mekkaoui, Diego Mesquita, P. Blomstedt, Samuel Kaski · 23 Apr 2020
A Unified Convergence Analysis for Shuffling-Type Gradient Methods · Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk · 19 Feb 2020
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization · Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan · 13 Feb 2020
Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems · Filip Hanzely, D. Kovalev, Peter Richtárik · 11 Feb 2020
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization · Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen · 08 Jul 2019
A Unifying Framework for Variance Reduction Algorithms for Finding Zeroes of Monotone Operators · Xun Zhang, W. Haskell, Z. Ye · 22 Jun 2019
Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods · Martin Morin, Pontus Giselsson · 21 Mar 2019
Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise · A. Kulunchakov, Julien Mairal · 25 Jan 2019
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop · D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019
On the Ineffectiveness of Variance Reduced Optimization for Deep Learning · Aaron Defazio, Léon Bottou · 11 Dec 2018
On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches · Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang · 14 Jul 2018
A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm · Konstantin Mishchenko, F. Iutzeler, J. Malick · 25 Jun 2018
Stochastic Nested Variance Reduction for Nonconvex Optimization · Dongruo Zhou, Pan Xu, Quanquan Gu · 20 Jun 2018
Stochastic Variance-Reduced Policy Gradient · Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli · 14 Jun 2018
Analysis of Biased Stochastic Gradient Descent Using Sequential Semidefinite Programs · Bin Hu, Peter M. Seiler, Laurent Lessard · 03 Nov 2017
Variance-Reduced Stochastic Learning under Random Reshuffling · Bicheng Ying, Kun Yuan, Ali H. Sayed · 04 Aug 2017
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints · Bin Hu, Peter M. Seiler, Anders Rantzer · 25 Jun 2017
Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method · Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro · 22 May 2017
Stochastic Recursive Gradient Algorithm for Nonconvex Optimization · Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč · 20 May 2017
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate · Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro · 01 Nov 2016
Big Batch SGD: Automated Inference using Adaptive Batch Sizes · Soham De, A. Yadav, David Jacobs, Tom Goldstein · 18 Oct 2016
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure · A. Bietti, Julien Mairal · 04 Oct 2016
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration · Hongzhou Lin, Julien Mairal, Zaïd Harchaoui · 04 Oct 2016
Trading-off variance and complexity in stochastic gradient descent · Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay Sanghavi · 22 Mar 2016
Katyusha: The First Direct Acceleration of Stochastic Gradient Methods · Zeyuan Allen-Zhu · 18 Mar 2016
Variance Reduction for Faster Non-Convex Optimization · Zeyuan Allen-Zhu, Elad Hazan · 17 Mar 2016
A Simple Practical Accelerated Method for Finite Sums · Aaron Defazio · 08 Feb 2016
Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters · Zeyuan Allen-Zhu, Yang Yuan, Karthik Sridharan · 05 Feb 2016
New Optimisation Methods for Machine Learning · Aaron Defazio · 09 Oct 2015
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants · Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola · 23 Jun 2015
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives · Zeyuan Allen-Zhu, Yang Yuan · 05 Jun 2015
SDCA without Duality · Shai Shalev-Shwartz · 22 Feb 2015
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning · Julien Mairal · 18 Feb 2014
Minimizing Finite Sums with the Stochastic Average Gradient · Mark Schmidt, Nicolas Le Roux, Francis R. Bach · 10 Sep 2013