ResearchTrend.AI
The unstable formula theorem revisited

9 December 2022
M. Malliaris
Shay Moran
Main: 1 page · Appendix: 24 pages
Abstract

We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
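The Littlestone classes in the abstract are those of finite Littlestone dimension, defined recursively via shattered mistake trees: a class has dimension at least d+1 when some point splits it into two nonempty subclasses, each of dimension at least d. As a minimal illustration (not from the paper), the following sketch computes the Littlestone dimension of a finite hypothesis class over a finite domain directly from that recursion; the class is represented as label tuples indexed by the domain points.

```python
def littlestone_dim(H, X):
    """Littlestone dimension of a finite class H over a finite domain X.

    H is a list of label tuples (one {0,1} label per point of X).
    Uses the recursive definition: Ldim(H) = max over points x that
    split H into two nonempty halves of 1 + min(Ldim(H0), Ldim(H1));
    a class with at most one hypothesis has dimension 0.
    """
    hyps = [dict(zip(X, h)) for h in H]

    def ldim(hs):
        if len(hs) <= 1:
            return 0
        best = 0
        for x in X:
            h0 = [h for h in hs if h[x] == 0]
            h1 = [h for h in hs if h[x] == 1]
            if h0 and h1:  # x splits hs: both labels are realizable
                best = max(best, 1 + min(ldim(h0), ldim(h1)))
        return best

    return ldim(hyps)


# Singletons over three points have Littlestone dimension 1,
# while thresholds over three points have dimension 2.
X = (0, 1, 2)
print(littlestone_dim([(1, 0, 0), (0, 1, 0), (0, 0, 1)], X))  # 1
print(littlestone_dim([(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)], X))  # 2
```

The exponential-time recursion is only meant to make the definition concrete; the paper's point is that finiteness of this quantity characterizes learnability in the new statistical model.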

@article{malliaris2025_2212.05050,
  title={The unstable formula theorem revisited via algorithms},
  author={Maryanthe Malliaris and Shay Moran},
  journal={arXiv preprint arXiv:2212.05050},
  year={2025}
}