
Exploiting the Sensitivity of $L_2$ Adversarial Examples to Erase-and-Restore

Abstract

By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. $L_2$ adversarial perturbations by Carlini and Wagner (CW) are among the most effective but difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-$L_2$ AEs remains an open question. We find that, by randomly erasing some pixels in an $L_2$ AE and then restoring it with an inpainting technique, the AE tends to be classified differently before and after these steps, whereas a benign sample does not exhibit this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits this intriguing sensitivity of $L_2$ attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique detects over 98% of $L_2$ AEs with a very low false positive rate on benign images. The detection technique also exhibits high transferability: a detection system trained on CW-$L_2$ AEs can accurately detect AEs generated by another $L_2$ attack method. More importantly, our approach demonstrates strong resilience to adaptive $L_2$ attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
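The core erase-and-restore check can be summarized by the following minimal sketch. It is an illustration only, not the authors' implementation: the `classify` callable, the erasure fraction `erase_frac`, and the choice of OpenCV's Telea algorithm as the inpainting step are all assumptions on our part (the abstract only says "an inpainting technique").

```python
import numpy as np
import cv2


def erase_and_restore_flags_ae(image, classify, erase_frac=0.2, seed=None):
    """Sketch of an Erase-and-Restore (E&R) style check.

    Randomly erases a fraction of pixels, restores them by inpainting,
    and flags the input as a likely L2 adversarial example if the
    predicted label changes.

    Assumptions (hypothetical, not from the paper):
      - `image` is a uint8 HxWx3 array,
      - `classify` maps such an image to a class label,
      - `erase_frac` is the fraction of pixels to erase.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    # Randomly select pixels to erase (1 = erased, 0 = kept).
    mask = (rng.random((h, w)) < erase_frac).astype(np.uint8)
    erased = image.copy()
    erased[mask.astype(bool)] = 0

    # Restore the erased pixels; Telea inpainting is one possible choice.
    restored = cv2.inpaint(erased, mask, inpaintRadius=3,
                           flags=cv2.INPAINT_TELEA)

    # An L2 AE tends to change label after erase-and-restore,
    # while a benign image tends to keep its label.
    return classify(image) != classify(restored)
```

In use, a flagged input would be rejected or sent for further inspection, while unflagged inputs pass through to the classifier unchanged; the abstract's reported low false positive rate corresponds to benign images keeping their labels under this transformation.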
