
Stochastic sparse adversarial attacks

Abstract

This paper introduces stochastic sparse adversarial attacks (SSAA), which are simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNC). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes. Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA in comparison with state-of-the-art methods. For instance, in the untargeted case, our method called Voting Folded Gaussian Attack (VFGA) scales efficiently to ImageNet and achieves a significantly lower $L_0$ score than SparseFool (up to $\frac{2}{5}$) while being faster. Moreover, VFGA achieves better $L_0$ scores on ImageNet than Sparse-RS when both attacks are fully successful on a large number of samples.
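To make the idea of a purely noise-based sparse ($L_0$) attack concrete, here is a minimal illustrative sketch. It is not the authors' VFGA algorithm: it simply perturbs a small random set of pixels with folded-Gaussian noise and keeps the first perturbation that flips an untargeted prediction. The `model` callable, `sigma`, and the trial budget are hypothetical placeholders.

```python
# Generic sketch of a noise-based sparse attack (assumed setup, not the paper's exact method).
import numpy as np

def sparse_folded_gaussian_attack(model, image, n_trials=100, pixels_per_trial=10, sigma=0.5):
    """Try random sparse folded-Gaussian perturbations until the predicted label changes."""
    orig_label = np.argmax(model(image[None])[0])
    h, w, c = image.shape
    for _ in range(n_trials):
        perturbed = image.copy()
        # Pick a small random set of pixel locations; pixels_per_trial controls the L_0 budget.
        idx = np.random.choice(h * w, size=pixels_per_trial, replace=False)
        rows, cols = np.unravel_index(idx, (h, w))
        # Folded-Gaussian noise: absolute value of a Gaussian draw, with a random sign per pixel.
        noise = np.abs(np.random.normal(0.0, sigma, size=(pixels_per_trial, c)))
        signs = np.random.choice([-1.0, 1.0], size=(pixels_per_trial, 1))
        perturbed[rows, cols] = np.clip(perturbed[rows, cols] + signs * noise, 0.0, 1.0)
        if np.argmax(model(perturbed[None])[0]) != orig_label:
            return perturbed  # untargeted attack succeeded
    return None  # no adversarial example found within the trial budget
```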
