Corruption Robust Active Learning

Abstract

We conduct theoretical studies on streaming-based active learning for binary classification under unknown adversarial label corruptions. In this setting, before the learner observes each sample, the adversary decides whether or not to corrupt its label. First, we show that, in a benign corruption setting (which includes the misspecification setting as a special case), with a slight enlargement of the hypothesis elimination threshold, the classical RobustCAL framework can (surprisingly) achieve nearly the same label complexity guarantee as in the non-corrupted setting. However, this algorithm can fail in the general corruption setting. To resolve this drawback, we propose a new algorithm which is provably correct without any assumptions on the presence of corruptions. Furthermore, this algorithm enjoys the minimax label complexity in the non-corrupted setting (which is achieved by RobustCAL) and only requires $\tilde{\mathcal{O}}(C_{\mathrm{total}})$ additional labels in the corrupted setting to achieve $\mathcal{O}(\varepsilon + \frac{C_{\mathrm{total}}}{n})$, where $\varepsilon$ is the target accuracy, $C_{\mathrm{total}}$ is the total number of corruptions, and $n$ is the total number of unlabeled samples.
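To make the mechanism concrete, the following is a minimal, hypothetical sketch of a CAL-style disagreement-based active learner whose elimination threshold is enlarged by a slack term, in the spirit of the enlarged-threshold idea summarized above. All names (`cal_with_slack`, `make_threshold`) and constants are illustrative assumptions, not the paper's actual algorithm or notation.

```python
# Hypothetical sketch of disagreement-based (CAL-style) streaming active
# learning with an enlarged elimination threshold. The `slack` parameter
# plays the role of a corruption budget: enlarging the threshold by the
# number of possible corrupted labels keeps the best hypothesis from
# being eliminated by adversarially flipped labels.

def make_threshold(t):
    """Illustrative hypothesis class: 1D threshold classifiers."""
    return lambda x: 1 if x >= t else 0

def cal_with_slack(stream, hypotheses, slack):
    """Maintain a version space over `hypotheses`; query a label only
    when surviving hypotheses disagree on the sample; eliminate any
    hypothesis whose empirical error exceeds the current best by more
    than `slack`."""
    errors = {h: 0 for h in hypotheses}
    version_space = set(hypotheses)
    queries = 0
    for x, y in stream:
        preds = {h: h(x) for h in version_space}
        if len(set(preds.values())) > 1:  # disagreement region: query label
            queries += 1
            for h in version_space:
                if preds[h] != y:
                    errors[h] += 1
            best = min(errors[h] for h in version_space)
            # Enlarged elimination threshold: tolerate `slack` extra errors.
            version_space = {h for h in version_space
                             if errors[h] <= best + slack}
    return version_space, queries
```

Setting `slack` on the order of the number of corruptions $C_{\mathrm{total}}$ guarantees the best hypothesis survives, since corrupted labels can charge it at most $C_{\mathrm{total}}$ extra errors; the trade-off is a looser version space and hence more label queries.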
