LCDB 1.1: A Database Illustrating Learning Curves Are More Ill-Behaved Than Previously Thought

21 May 2025
Cheng Yan
Felix Mohr
Tom Viering
Main: 13 pages · Appendix: 15 pages · Bibliography: 4 pages · 19 figures · 9 tables
Abstract

Sample-wise learning curves plot performance versus training set size. They are useful for studying scaling laws and speeding up hyperparameter tuning and model selection. Learning curves are often assumed to be well-behaved: monotone (i.e. improving with more data) and convex. By constructing the Learning Curves Database 1.1 (LCDB 1.1), a large-scale database with high-resolution learning curves, we show that learning curves are less often well-behaved than previously thought. Using statistically rigorous methods, we observe significant ill-behavior in approximately 14% of the learning curves, almost twice as much as in previous estimates. We also identify which learners are to blame and show that specific learners are more ill-behaved than others. Additionally, we demonstrate that different feature scalings rarely resolve ill-behavior. We evaluate the impact of ill-behavior on downstream tasks, such as learning curve fitting and model selection, and find it poses significant challenges, underscoring the relevance and potential of LCDB 1.1 as a challenging benchmark for future research.
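The abstract's definitions of well-behaved curves can be illustrated with a minimal sketch. The function below (a hypothetical helper, not from LCDB 1.1, and far simpler than the statistically rigorous tests the paper applies) checks an error-rate learning curve for the two properties: monotone (error never increases with more data) and convex (finite-difference slopes are non-decreasing).

```python
import numpy as np

def diagnose_curve(sizes, errors, tol=0.0):
    """Check whether an error-rate learning curve is monotone and convex.

    sizes:  strictly increasing training set sizes.
    errors: error rate at each size (lower is better).

    Monotone: error never increases as the training set grows.
    Convex:   successive finite-difference slopes never decrease.
    """
    sizes = np.asarray(sizes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    deltas = np.diff(errors)                        # change in error per step
    slopes = deltas / np.diff(sizes)                # finite-difference slopes
    monotone = bool(np.all(deltas <= tol))          # never gets worse
    convex = bool(np.all(np.diff(slopes) >= -tol))  # slopes flatten out
    return {"monotone": monotone, "convex": convex}

# A smoothly improving curve passes both checks; a curve with a "peaking"
# bump (error rising before falling again) fails the monotonicity check.
good = diagnose_curve([10, 20, 40, 80], [0.50, 0.30, 0.20, 0.15])
bad = diagnose_curve([10, 20, 40, 80], [0.50, 0.30, 0.35, 0.20])
```

Note that this pointwise check on a single noisy curve would flag random fluctuations as ill-behavior; the paper's contribution is precisely to separate genuine ill-behavior from noise with statistical tests, so this sketch only conveys the definitions.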

@article{yan2025_2505.15657,
  title={LCDB 1.1: A Database Illustrating Learning Curves Are More Ill-Behaved Than Previously Thought},
  author={Cheng Yan and Felix Mohr and Tom Viering},
  journal={arXiv preprint arXiv:2505.15657},
  year={2025}
}