
arXiv:1405.0042v2 (latest)

Regularization by Early Stopping for Online Learning Algorithms

30 April 2014
Lorenzo Rosasco, Silvia Villa
Abstract

We study the learning algorithm given by incremental gradient descent on the empirical risk over an infinite-dimensional hypothesis space. In a statistical learning setting, we show that, with a universal step-size and a suitable early stopping rule, the resulting algorithm is universally consistent, and we derive finite-sample bounds. Our results provide a theoretical foundation for early stopping in online learning algorithms and shed light on the effect of allowing multiple passes over the data.
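The setup described in the abstract can be illustrated with a minimal sketch: incremental gradient descent for least squares, run in passes over the data, with a validation-based stopping rule standing in for the paper's theoretically derived early stopping rule. The step-size heuristic and the stopping criterion below are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def incremental_gd_early_stopping(X, y, X_val, y_val, step=None, max_passes=50):
    """Incremental gradient descent for least squares with early stopping.

    Illustrative sketch only: the step-size heuristic and the
    validation-based stopping rule are assumptions, not the rule
    analyzed in the paper.
    """
    n, d = X.shape
    if step is None:
        # Crude "universal" step-size: inverse of the largest squared
        # input norm (an assumption, chosen so each update is stable).
        step = 1.0 / max(np.sum(X ** 2, axis=1).max(), 1.0)
    w = np.zeros(d)
    best_w = w.copy()
    best_err = np.mean((X_val @ w - y_val) ** 2)
    for t in range(max_passes):
        # One incremental pass: update on each example in turn.
        for i in range(n):
            w -= step * (X[i] @ w - y[i]) * X[i]
        err = np.mean((X_val @ w - y_val) ** 2)
        if err >= best_err:
            # Validation error stopped improving: stop early and
            # return the best iterate seen so far.
            return best_w, t
        best_w, best_err = w.copy(), err
    return best_w, max_passes
```

Here the number of passes plays the role of the regularization parameter: stopping early limits how closely the iterates fit the empirical risk, which is the mechanism the paper studies.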
