
Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Abstract

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner. We give a reduction that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust learner $\mathcal{A}$ for $\mathcal{C}$. The number of calls to $\mathcal{A}$ depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
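To make the black-box interface concrete, here is a minimal illustrative sketch, in Python, of what "robust learning via calls to a non-robust learner" can look like. It is a hypothetical boosting-style wrapper invented for this example, not the paper's algorithm: `robustify`, `threshold_learner`, and the perturbation set are all assumptions for illustration. Each round calls the non-robust learner, adds perturbed copies of points the current majority-vote ensemble gets wrong, and finally returns the majority vote.

```python
def robustify(learner, sample, perturb, rounds=3):
    """Hypothetical boosting-style wrapper around a black-box
    non-robust learner: repeatedly call `learner`, augmenting the
    training set with adversarial perturbations the current
    majority-vote ensemble misclassifies, then return that vote."""
    hypotheses = []
    data = list(sample)
    for _ in range(rounds):
        hypotheses.append(learner(data))
        for x, y in sample:
            for z in perturb(x):
                correct = sum(h(z) == y for h in hypotheses)
                if 2 * correct <= len(hypotheses):  # ensemble errs on z
                    data.append((z, y))
    return lambda x: int(2 * sum(h(x) for h in hypotheses) > len(hypotheses))

def threshold_learner(data):
    """Toy non-robust learner for 1-D thresholds: empirical risk
    minimization over thresholds placed at the data points."""
    xs = sorted({x for x, _ in data})
    best_t, best_err = xs[0] - 1.0, float("inf")
    for t in [xs[0] - 1.0] + xs:
        err = sum(int(x > t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return lambda x, t=best_t: int(x > t)

# Adversary may shift each example by up to 0.3.
sample = [(0.0, 0), (1.0, 1)]
perturb = lambda x: [x - 0.3, x, x + 0.3]
g = robustify(threshold_learner, sample, perturb)
```

The non-robust learner alone may place its threshold right at a training point, leaving it vulnerable to perturbation; the ensemble `g` classifies all perturbed copies correctly. The paper's actual reduction is different machinery and achieves a number of oracle calls logarithmic in the number of allowed perturbations.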
