Estimating Learnability in the Sublinear Data Regime

4 May 2018
Weihao Kong
Gregory Valiant
arXiv:1805.01626 · PDF · HTML
Abstract

We consider the problem of estimating how well a model class is capable of fitting a distribution of labeled data. We show that it is often possible to accurately estimate this "learnability" even when given an amount of data that is too small to reliably learn any accurate model. Our first result applies to the setting where the data is drawn from a $d$-dimensional distribution with isotropic covariance (or known covariance), and the label of each datapoint is an arbitrary noisy function of the datapoint. In this setting, we show that with $O(\sqrt{d})$ samples, one can accurately estimate the fraction of the variance of the label that can be explained via the best linear function of the data. In contrast to this sublinear sample size, finding an approximation of the best-fit linear function requires on the order of $d$ samples. Our sublinear sample results and approach also extend to the non-isotropic setting, where the data distribution has an (unknown) arbitrary covariance matrix: we show that, if the label $y$ of a point $x$ is a linear function with independent noise, $y = \langle x, \beta \rangle + \mathrm{noise}$ with $\|\beta\|$ bounded, then the variance of the noise can be estimated to error $\epsilon$ with $O(d^{1-1/\log(1/\epsilon)})$ samples if the covariance matrix has bounded condition number, or $O(d^{1-\sqrt{\epsilon}})$ samples if there are no bounds on the condition number. We also establish that these sample complexities are optimal up to constant factors. Finally, we extend these techniques to the setting of binary classification, where we obtain analogous sample complexities for the problem of estimating the prediction error of the best linear classifier, in a natural model of binary labeled data. We demonstrate the practical viability of our approaches on several real and synthetic datasets.
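As a rough, illustrative sketch (not code from the paper), the kind of sublinear estimator described above can, in the isotropic case, be built from the standard pairwise U-statistic $\frac{1}{n(n-1)}\sum_{i\neq j} y_i y_j \langle x_i, x_j\rangle$, which is unbiased for $\|\mathbb{E}[xy]\|^2 = \|\beta\|^2$ under the model $y = \langle x, \beta\rangle + \mathrm{noise}$ with identity covariance; dividing by an estimate of $\mathrm{Var}(y)$ gives the explained-variance fraction. The function and variable names below are hypothetical, and the exact estimator and guarantees in the paper may differ.

```python
import numpy as np

def explained_variance_fraction(X, y):
    """Hypothetical sketch: pairwise (U-statistic) estimate of the fraction of
    Var(y) explained by the best linear function of x, assuming isotropic
    (identity-covariance) covariates and y = <x, beta> + independent noise.

    For i != j, E[y_i * y_j * <x_i, x_j>] = ||E[x y]||^2 = ||beta||^2, so the
    average over distinct pairs is unbiased for ||beta||^2 even when the number
    of samples n is much smaller than the dimension d.
    """
    n = X.shape[0]
    G = X @ X.T                        # pairwise inner products <x_i, x_j>
    S = np.outer(y, y) * G             # terms y_i * y_j * <x_i, x_j>
    beta_sq = (S.sum() - np.trace(S)) / (n * (n - 1))  # average over i != j
    var_y = y.var(ddof=1)              # plug-in estimate of Var(y)
    return float(np.clip(beta_sq / var_y, 0.0, 1.0))

# Toy usage: n < d, so fitting beta by least squares is underdetermined,
# yet the explained-variance fraction is still estimable.
rng = np.random.default_rng(0)
d, n = 2000, 1000
beta = rng.normal(size=d) / np.sqrt(d)            # ||beta||^2 roughly 1
X = rng.normal(size=(n, d))                       # isotropic covariates
y = X @ beta + rng.normal(scale=1.0, size=n)      # true fraction roughly 0.5
print(explained_variance_fraction(X, y))
```

The key design point this sketch illustrates is that the statistic averages over pairs of samples rather than fitting a model, which is why its accuracy can improve with $n$ even while $n$ remains far too small to recover $\beta$ itself.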
