Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Sharan Vaswani, Francis R. Bach, Mark W. Schmidt
arXiv:1810.07288, 16 October 2018
Papers citing "Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron" (showing 6 of 56):
Better Theory for SGD in the Nonconvex World
Ahmed Khaled, Peter Richtárik (09 Feb 2020)

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis (19 Nov 2019)

Linear Lower Bounds and Conditioning of Differentiable Games
Adam Ibrahim, Waïss Azizian, Gauthier Gidel, Ioannis Mitliagkas (17 Jun 2019)

Reducing the variance in online optimization by transporting past gradients
Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux (08 Jun 2019)

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko, Filip Hanzely, Peter Richtárik (27 Jan 2019)

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)