Computational-Statistical Tradeoffs from NP-hardness
A central question in computer science and statistics is whether efficient algorithms can achieve the information-theoretic limits of statistical problems. Many computational-statistical tradeoffs have been shown under average-case assumptions, but since statistical problems are average-case in nature, it has been a challenge to base them on standard worst-case assumptions. In PAC learning, where such tradeoffs were first studied, the question is whether computational efficiency can come at the cost of using more samples than information-theoretically necessary. We base such tradeoffs on NP-hardness and obtain:

- Sharp computational-statistical tradeoffs assuming NP requires exponential time: for every polynomial p, there is an n-variate class C with VC dimension 1 such that the sample complexity of time-efficiently learning C is Θ(p(n)).

- A characterization of RP vs. NP in terms of learning: RP = NP iff every NP-enumerable class is learnable with O(VC dimension) samples in polynomial time. The forward implication has been known since Pitt and Valiant (1988); we prove the reverse implication.

Notably, all our lower bounds hold against improper learners. These are the first NP-hardness results for improperly learning a subclass of polynomial-size circuits, circumventing the formal barriers of Applebaum, Barak, and Xiao (2008).
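For readers who prefer the two results in symbols, a rough LaTeX restatement follows. It is an informal sketch paraphrased from the abstract: the class name C_p, the "time-efficient sample complexity" notation m_eff, and the exact asymptotic forms are illustrative choices here, not the paper's formal theorem statements.

```latex
% Informal, hedged restatement of the abstract's two results.
% Notation (\mathcal{C}_p, m_eff) is illustrative; the paper's formal
% theorems may differ in constants, log factors, and parameterization.
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\RP}{\mathsf{RP}}
\newcommand{\NP}{\mathsf{NP}}
\newcommand{\VC}{\mathrm{VCdim}}
\begin{document}

\paragraph{Sharp tradeoff (informal).}
Assume $\NP$ requires exponential time. Then for every polynomial $p$
there is an $n$-variate class $\mathcal{C}_p$ with $\VC(\mathcal{C}_p)=1$
whose sample complexity for polynomial-time learners satisfies
\[
  m_{\mathrm{eff}}(\mathcal{C}_p) = \Theta\bigl(p(n)\bigr),
\]
even though, information-theoretically, the number of samples needed for a
class of VC dimension $1$ depends only on the accuracy and confidence
parameters, not on $n$.

\paragraph{Characterization (informal).}
\[
  \RP = \NP
  \quad\Longleftrightarrow\quad
  \text{every $\NP$-enumerable class $\mathcal{C}$ is learnable with
  $O(\VC(\mathcal{C}))$ samples in polynomial time.}
\]

\end{document}
```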
View on arXiv