Active Learning of General Halfspaces: Label Queries vs Membership Queries
We study the problem of learning general (i.e., not necessarily homogeneous) halfspaces under the Gaussian distribution on ℝ^d, given some form of query access. In the classical pool-based active learning model, where the algorithm is allowed to make adaptive label queries on previously sampled points, we establish a strong information-theoretic lower bound ruling out non-trivial improvements over the passive setting. Specifically, we show that any active learner requires label complexity of , where is the number of unlabeled examples; in particular, to beat the passive label complexity of , an active learner requires a pool of unlabeled samples. On the positive side, we show that this lower bound can be circumvented with membership query access, even in the agnostic model: we give a computationally efficient learner with query complexity of achieving an error guarantee of , where is the bias of the target halfspace and is the 0-1 loss of the optimal halfspace. As a corollary, we obtain a strong separation between the active and membership query models. Taken together, our results characterize the complexity of learning general halfspaces under Gaussian marginals in these models.
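As a toy illustration of the two access models contrasted above (not the paper's algorithm), the sketch below fixes a hypothetical general halfspace sign(⟨w, x⟩ − τ): a pool-based active learner may only request labels of points drawn in a fixed Gaussian pool, whereas a membership-query learner may label arbitrary points, e.g. bisecting along a chosen direction to locate the bias τ. The direction `u`, pool size, and bisection range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Hypothetical target: a general (non-homogeneous) halfspace sign(<w, x> - tau).
w = rng.normal(size=d)
w /= np.linalg.norm(w)
tau = 0.7  # the bias term of the halfspace

def label(x):
    """Label oracle for the target halfspace."""
    return 1 if w @ x >= tau else -1

# Pool-based active learning: label queries are restricted to a fixed
# pool of Gaussian samples (an active learner would pick indices adaptively).
pool = rng.normal(size=(1000, d))
pool_labels = [label(x) for x in pool]

# Membership queries: the learner may label *any* point. For illustration we
# probe along u = w (so <w, u> = 1) and bisect to locate the threshold tau.
u = w.copy()
lo, hi = 0.0, 10.0
for _ in range(50):
    mid = (lo + hi) / 2
    if label(mid * u) == 1:  # mid * <w, u> >= tau, so shrink from above
        hi = mid
    else:
        lo = mid
# hi now approximates tau, since the sign flips exactly at mid = tau
```

The bisection step is what a pool-restricted learner cannot do: the probe points `mid * u` are chosen by the learner and will almost surely not appear in any polynomial-size Gaussian pool.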