Adversarially Robust Learning with Tolerance

International Conference on Algorithmic Learning Theory (ALT), 2022
Abstract

We study the problem of tolerant adversarial PAC learning with respect to metric perturbation sets. In adversarial PAC learning, an adversary is allowed to replace a test point $x$ with an arbitrary point in a closed ball of radius $r$ centered at $x$. In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius $(1+\gamma)r$. For perturbation sets with doubling dimension $d$, we show that a variant of the natural "perturb-and-smooth" algorithm PAC learns any hypothesis class $\mathcal{H}$ with VC dimension $v$ in the $\gamma$-tolerant adversarial setting with $O\left(\frac{v(1+1/\gamma)^{O(d)}}{\varepsilon}\right)$ samples. This is the first such general guarantee with linear dependence on $v$, even for the special case where the domain is the real line and the perturbation sets are closed balls (intervals) of radius $r$. However, the proposed guarantees for the perturb-and-smooth algorithm currently hold only in the tolerant robust realizable setting and exhibit exponential dependence on $d$. We additionally propose an alternative learning method, based on sample compression, which yields sample complexity bounds with only linear dependence on the doubling dimension, even in the more general agnostic case.
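To make the smoothing step concrete, the following is a minimal illustrative sketch of the prediction rule behind a "perturb-and-smooth" style classifier: predict by majority vote of a hypothesis over random points sampled from a small ball around the test input. All names, the choice of smoothing radius $(\gamma/2)r$, and the uniform-ball sampling scheme are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def smoothed_predict(h, x, r, gamma, n_samples=200, rng=None):
    """Illustrative smoothed prediction: majority vote of hypothesis h
    over random points near x. The smoothing radius (gamma/2)*r is one
    plausible choice, not the paper's exact parameterization."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    # Sample uniformly from the Euclidean ball of radius (gamma/2)*r:
    # random direction times a radius with density proportional to t^(d-1).
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = (gamma / 2) * r * rng.uniform(size=(n_samples, 1)) ** (1.0 / d)
    votes = [h(x + u) for u in dirs * radii]
    # Majority vote over the sampled perturbations.
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

For example, with a linear threshold hypothesis `h(z) = 1 if z[0] > 0 else 0` and a test point well inside the positive region, every sampled perturbation agrees, so the smoothed prediction matches the base hypothesis.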
