ResearchTrend.AI

Beating SGD Saturation with Tail-Averaging and Minibatching

arXiv:1902.08668 · 22 February 2019
Nicole Mücke, Gergely Neu, Lorenzo Rosasco
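For context, tail-averaging means returning the average of only the last fraction of SGD iterates instead of the final iterate or the full average, which is the device the paper's title refers to. A minimal sketch for minibatch least-squares SGD follows; the function name, step size, batch size, and tail fraction are illustrative choices, not values taken from the paper.

```python
import numpy as np

def tail_averaged_sgd(X, y, lr=0.05, batch_size=8, n_steps=2000,
                      tail_frac=0.5, seed=0):
    """Minibatch SGD for least squares, returning the tail average
    of the iterates (average over the last tail_frac of steps)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    tail_start = int(n_steps * (1.0 - tail_frac))
    w_tail_sum = np.zeros(d)
    tail_count = 0
    for t in range(n_steps):
        # Sample a minibatch with replacement.
        idx = rng.integers(0, n, size=batch_size)
        Xb, yb = X[idx], y[idx]
        # Stochastic gradient of 0.5 * ||Xw - y||^2 on the minibatch.
        grad = Xb.T @ (Xb @ w - yb) / batch_size
        w -= lr * grad
        # Accumulate only the tail iterates.
        if t >= tail_start:
            w_tail_sum += w
            tail_count += 1
    return w_tail_sum / tail_count
```

Averaging only the tail avoids the bias that early, far-from-optimum iterates would inject into a full average, while still suppressing the variance of the final iterates.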

Papers citing "Beating SGD Saturation with Tail-Averaging and Minibatching"

8 papers shown

  • Iterative regularization in classification via hinge loss diagonal descent
    Vassilis Apidopoulos, T. Poggio, Lorenzo Rosasco, S. Villa (24 Dec 2022)
  • Online Regularized Learning Algorithm for Functional Data
    Yuan Mao, Zheng-Chu Guo (24 Nov 2022)
  • Provable Generalization of Overparameterized Meta-learning Trained with SGD
    Yu Huang, Yingbin Liang, Longbo Huang (18 Jun 2022)
  • Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
    Shaojie Li, Yong Liu (19 Jul 2021)
  • From inexact optimization to learning via gradient concentration
    Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco (09 Jun 2021)
  • Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
    Yunwen Lei, Yiming Ying (15 Jun 2020)
  • Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm
    Simon Fischer, Ingo Steinwart (23 Feb 2017)
  • Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
    Ohad Shamir, Tong Zhang (08 Dec 2012)