Provable Robustness of Adversarial Training for Learning Halfspaces with Noise

We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of agnostic label noise. Denoting by $\mathsf{OPT}_{p,r}$ the best robust classification error achieved by a halfspace that is robust to perturbations of $\ell_p$ balls of radius $r$, we show that adversarial training on the standard binary cross-entropy loss yields adversarially robust halfspaces up to (robust) classification error $\tilde{O}(\sqrt{\mathsf{OPT}_{2,r}})$ for $p = 2$, and $\tilde{O}(d^{1/4}\sqrt{\mathsf{OPT}_{\infty,r}} + d^{1/2}\,\mathsf{OPT}_{\infty,r})$ when $p = \infty$. Our results hold for distributions satisfying anti-concentration properties enjoyed by, among others, log-concave isotropic distributions. We additionally show that if one instead uses a nonconvex sigmoidal loss, adversarial training yields halfspaces with an improved robust classification error of $\tilde{O}(\mathsf{OPT}_{2,r})$ for $p = 2$, and $\tilde{O}(d^{1/4}\,\mathsf{OPT}_{\infty,r})$ when $p = \infty$. To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
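For intuition, here is a minimal sketch of adversarial training for a halfspace under $\ell_2$ perturbations. It is not the paper's algorithm; the data model, step size, and function names are illustrative assumptions. It uses the standard fact that for a linear classifier $x \mapsto \mathrm{sign}(\langle w, x \rangle)$, the inner maximization over an $\ell_2$ ball of radius $r$ has a closed form: the worst-case perturbation shifts the margin $y\langle w, x \rangle$ down by $r\|w\|_2$, so adversarial training reduces to gradient descent on a margin-shifted cross-entropy loss.

```python
import numpy as np
from scipy.special import expit  # stable sigmoid 1 / (1 + exp(-z))

def robust_logistic_loss(w, X, y, r):
    """Worst-case logistic loss for a halfspace under l2 perturbations
    of radius r: the adversary shifts each margin down by r * ||w||_2."""
    margins = y * (X @ w) - r * np.linalg.norm(w)
    return np.mean(np.logaddexp(0.0, -margins))  # stable log(1 + exp(-m))

def robust_logistic_grad(w, X, y, r):
    """Gradient of the robust logistic loss above (illustrative)."""
    nrm = np.linalg.norm(w) + 1e-12
    margins = y * (X @ w) - r * nrm
    s = -expit(-margins)                       # d loss / d margin
    grad_margin = y[:, None] * X - r * w / nrm # d margin / d w, per sample
    return (s[:, None] * grad_margin).mean(axis=0)

def adversarial_train(X, y, r, lr=0.1, steps=500, seed=0):
    """Plain gradient descent on the robust loss (a sketch, not the
    paper's exact procedure or step-size schedule)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    for _ in range(steps):
        w -= lr * robust_logistic_grad(w, X, y, r)
    return w

# Toy usage: Gaussian features, linear labels with random label flips
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))
y = np.sign(X @ rng.standard_normal(10))
y[rng.random(1000) < 0.05] *= -1  # crude stand-in for agnostic noise
w_hat = adversarial_train(X, y, r=0.1)
```

The same reduction applies under $\ell_\infty$ perturbations with the dual norm $\|w\|_1$ in place of $\|w\|_2$.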