Estimating decision tree learnability with polylogarithmic sample complexity

3 November 2020
Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan
Abstract

We show that top-down decision tree learning heuristics are amenable to highly efficient learnability estimation: for monotone target functions, the error of the decision tree hypothesis constructed by these heuristics can be estimated with polylogarithmically many labeled examples, exponentially smaller than the number necessary to run these heuristics, and indeed, exponentially smaller than the information-theoretic minimum required to learn a good decision tree. This adds to a small but growing list of fundamental learning algorithms that have been shown to be amenable to learnability estimation. En route to this result, we design and analyze sample-efficient minibatch versions of top-down decision tree learning heuristics and show that they achieve the same provable guarantees as the full-batch versions. We further give "active local" versions of these heuristics: given a test point $x^\star$, we show how the label $T(x^\star)$ of the decision tree hypothesis $T$ can be computed with polylogarithmically many labeled examples, exponentially smaller than the number necessary to learn $T$.
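As a rough illustration of the minibatch and "active local" ideas, here is a minimal Python sketch. It is not the paper's algorithm: it uses a Gini-style impurity as a stand-in for the paper's splitting criterion, and all names (sample_restricted, grow, predict_active) and the toy majority target are hypothetical. Each split decision is made from a fresh minibatch of labeled examples reaching the current leaf, and the active-local variant grows only the root-to-$x^\star$ path.

```python
import numpy as np

def sample_restricted(target, n, m, restriction, rng):
    # Draw m uniform points of {0,1}^n consistent with the partial
    # assignment `restriction` (coordinate -> bit), labeled by `target`.
    X = rng.integers(0, 2, size=(m, n))
    for i, b in restriction.items():
        X[:, i] = b
    y = np.array([target(x) for x in X])
    return X, y

def gini(y):
    # Gini impurity of a 0/1 label vector (stand-in splitting criterion).
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return p * (1.0 - p)

def best_split(X, y, available):
    # Greedy top-down rule: pick the coordinate whose split most
    # reduces impurity on this minibatch.
    base, best_i, best_gain = gini(y), available[0], -1.0
    for i in available:
        left, right = y[X[:, i] == 0], y[X[:, i] == 1]
        w = len(left) / len(y)
        gain = base - (w * gini(left) + (1.0 - w) * gini(right))
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i

def grow(target, n, depth, batch, restriction, rng):
    # Minibatch top-down growth: every split decision uses a *fresh*
    # small batch of examples reaching the current leaf, rather than
    # one large full-batch sample.
    X, y = sample_restricted(target, n, batch, restriction, rng)
    if depth == 0 or y.min() == y.max() or len(restriction) == n:
        return int(round(y.mean()))            # leaf: majority label
    i = best_split(X, y, [j for j in range(n) if j not in restriction])
    return (i,
            grow(target, n, depth - 1, batch, {**restriction, i: 0}, rng),
            grow(target, n, depth - 1, batch, {**restriction, i: 1}, rng))

def predict(tree, x):
    while isinstance(tree, tuple):
        i, lo, hi = tree
        tree = hi if x[i] else lo
    return tree

def predict_active(target, n, depth, batch, x_star, rng):
    # "Active local" flavor: grow only the root-to-x_star path, so a
    # single label T(x_star) costs far fewer examples than building T.
    restriction = {}
    for _ in range(depth):
        X, y = sample_restricted(target, n, batch, restriction, rng)
        if y.min() == y.max() or len(restriction) == n:
            break
        i = best_split(X, y, [j for j in range(n) if j not in restriction])
        restriction[i] = int(x_star[i])
    _, y = sample_restricted(target, n, batch, restriction, rng)
    return int(round(y.mean()))

rng = np.random.default_rng(0)
target = lambda x: int(x[0] + x[1] + x[2] >= 2)   # toy monotone target
tree = grow(target, n=8, depth=3, batch=64, restriction={}, rng=rng)
x_star = rng.integers(0, 2, size=8)
print(predict(tree, x_star), predict_active(target, 8, 3, 64, x_star, rng))
```

In the paper's setting, the point is that for monotone targets the minibatch split decisions agree with the full-batch ones with high probability, which is what makes the polylogarithmic sample bounds possible; this sketch makes no attempt to set batch sizes to achieve that guarantee.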
