Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness

International Conference on Machine Learning (ICML), 2020
Abstract

Randomized smoothing, using just a simple isotropic Gaussian distribution, has been shown to produce good robustness guarantees against $\ell_2$-norm bounded adversaries. In this work, we show that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime. In particular, for a vast class of i.i.d. smoothing distributions, we prove that the largest $\ell_p$-radius that can be certified decreases as $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ with dimension $d$ for $p > 2$. Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius. When restricted to {\it generalized} Gaussian smoothing, these two bounds can be shown to be within a constant factor of each other in an asymptotic sense, establishing that Gaussian smoothing provides the best possible results, up to a constant factor, for $p \geq 2$. We present experimental results on CIFAR to validate our theory. For other smoothing distributions, such as a uniform distribution within an $\ell_1$ or an $\ell_\infty$-norm ball, we show upper bounds of the form $O(1/d)$ and $O(1/d^{1 - \frac{1}{p}})$ respectively, which have an even worse dependence on $d$.
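
To make the dimension dependence concrete, below is a minimal sketch (not code from the paper) of the one-sided Gaussian-smoothing certificate of Cohen et al. (2019), $R = \sigma\,\Phi^{-1}(p_A)$, together with the $\ell_p$ radius it induces for $p \geq 2$ via the norm-equivalence inequality $\|x\|_2 \leq d^{\frac{1}{2} - \frac{1}{p}}\|x\|_p$. The function names and example parameters are illustrative assumptions.

```python
# A minimal sketch, assuming the standard Gaussian-smoothing certificate;
# not the paper's code. Function names and parameters are illustrative.
import numpy as np
from scipy.stats import norm

def certified_l2_radius(p_lower: float, sigma: float) -> float:
    """Certified l_2 radius R = sigma * Phi^{-1}(p_lower), where p_lower is a
    lower confidence bound on the smoothed classifier's top-class probability
    (the one-sided form of the Cohen et al. (2019) certificate)."""
    return sigma * norm.ppf(p_lower)

def induced_lp_radius(r2: float, d: int, p: float) -> float:
    """For p >= 2, an l_p ball of radius r lies inside the l_2 ball of radius
    d^(1/2 - 1/p) * r, so a certified l_2 radius r2 yields a certified l_p
    radius r2 / d^(1/2 - 1/p) -- the O(1/d^(1/2 - 1/p)) decay above."""
    return r2 / d ** (0.5 - 1.0 / p)

# Illustrative numbers: CIFAR-scale dimension d = 3*32*32 = 3072,
# noise level sigma = 0.5, top-class probability bound p_lower = 0.9.
d = 3 * 32 * 32
r2 = certified_l2_radius(p_lower=0.9, sigma=0.5)
print(f"certified l_2 radius:   {r2:.4f}")
print(f"induced l_inf radius:   {induced_lp_radius(r2, d, p=np.inf):.6f}")
```

With these example numbers, passing to $p = \infty$ shrinks the radius by a factor of $\sqrt{d} \approx 55$, illustrating the curse of dimensionality the abstract describes.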
