A Competitive Algorithm for Agnostic Active Learning

Abstract

For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs. We take a different approach to agnostic active learning, getting an algorithm that is competitive with the optimal algorithm for any binary hypothesis class $H$ and distribution $D_X$ over $X$. In particular, if any algorithm can use $m^*$ queries to get $O(\eta)$ error, then our algorithm uses $O(m^* \log |H|)$ queries to get $O(\eta)$ error. Our algorithm lies in the vein of the splitting-based approach of Dasgupta [2004], which gets a similar result for the realizable ($\eta = 0$) setting. We also show that it is NP-hard to do better than our algorithm's $O(\log |H|)$ overhead in general.
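To make the splitting idea concrete, here is a minimal sketch of a greedy version-space splitting learner in the spirit of Dasgupta [2004] for the realizable ($\eta = 0$) case; it is not the paper's agnostic algorithm, and all names (`splitting_active_learner`, `query_label`, `budget`) are hypothetical. Each query is chosen to split the surviving hypotheses as evenly as possible, so every answer removes a large fraction of the version space.

```python
import numpy as np

# Illustrative sketch only: a greedy version-space "splitting" learner for the
# realizable case, in the spirit of Dasgupta [2004]. The paper's agnostic
# algorithm is more involved; all names here are hypothetical.

def splitting_active_learner(H, X_pool, query_label, budget):
    """H: list of hypotheses (callables x -> {0, 1});
    X_pool: candidate query points;
    query_label: labeling oracle x -> {0, 1};
    budget: maximum number of label queries."""
    version_space = list(H)
    for _ in range(budget):
        if len(version_space) <= 1:
            break

        # Score a point by the guaranteed shrinkage of the version space:
        # whichever label comes back, at least min(ones, zeros) hypotheses
        # are eliminated, so we pick the most balanced split.
        def split_score(x):
            ones = sum(h(x) for h in version_space)
            return min(ones, len(version_space) - ones)

        x = max(X_pool, key=split_score)
        y = query_label(x)
        version_space = [h for h in version_space if h(x) == y]
    return version_space[0] if version_space else None

# Example: threshold classifiers on [0, 1). With balanced splits, roughly
# log2 |H| queries identify the target, versus a linear number of passive
# samples for the same guarantee.
thresholds = [lambda x, t=t: int(x >= t) for t in np.linspace(0, 1, 64)]
pool = list(np.linspace(0, 1, 256))
target = thresholds[41]
learned = splitting_active_learner(thresholds, pool, target, budget=10)
```

In this realizable toy setting the balanced split halves the version space per query; the paper's contribution is an agnostic analogue of this idea whose query count is within an $O(\log |H|)$ factor of the optimal $m^*$.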
