Beating SGD Saturation with Tail-Averaging and Minibatching
arXiv:1902.08668 · 22 February 2019
Nicole Mücke, Gergely Neu, Lorenzo Rosasco

Papers citing "Beating SGD Saturation with Tail-Averaging and Minibatching" (8 of 8 shown)
Iterative regularization in classification via hinge loss diagonal descent. Vassilis Apidopoulos, T. Poggio, Lorenzo Rosasco, S. Villa. 24 Dec 2022.

Online Regularized Learning Algorithm for Functional Data. Yuan Mao, Zheng-Chu Guo. 24 Nov 2022.

Provable Generalization of Overparameterized Meta-learning Trained with SGD. Yu Huang, Yingbin Liang, Longbo Huang. 18 Jun 2022.

Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints. Shaojie Li, Yong Liu. 19 Jul 2021.

From inexact optimization to learning via gradient concentration. Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco. 09 Jun 2021.

Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent. Yunwen Lei, Yiming Ying. 15 Jun 2020.

Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm. Simon Fischer, Ingo Steinwart. 23 Feb 2017.

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.