Releasing Inequality Phenomena in L_{\infty}-Adversarial Training via Input Gradient Distillation

Abstract

Since adversarial examples were first discovered and shown to cause catastrophic degradation of DNNs, many adversarial defense methods have been devised, among which adversarial training is considered the most effective. However, recent work uncovered inequality phenomena in l_{\infty}-adversarial training, revealing that an l_{\infty}-adversarially trained model is vulnerable when a few important pixels are perturbed by i.i.d. noise or occluded. In this paper, we propose a simple yet effective method called Input Gradient Distillation (IGD) to release the inequality phenomena in l_{\infty}-adversarial training. Experiments show that, while preserving the model's adversarial robustness, IGD reduces the l_{\infty}-adversarially trained model's error rate under inductive noise and inductive occlusion by up to 60\% and 16.53\%, respectively, compared to PGDAT, and on noisy images in ImageNet-C by up to 21.11\%. Moreover, we formally explain why equality of the model's saliency map improves such robustness.
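The abstract does not state how saliency-map (in)equality is measured; one common way to quantify how strongly a map's mass concentrates on a few pixels is the Gini coefficient. The following pure-Python sketch is illustrative only and is not taken from the paper: a value of 0 means a perfectly uniform saliency map, while values near 1 mean a few pixels dominate, which is the regime the inequality phenomena describe.

```python
def gini(values):
    """Gini coefficient of non-negative saliency values.

    0.0 -> perfectly equal map; values near 1.0 -> a few entries
    carry almost all of the saliency mass.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form: G = (2 * sum_i i*x_(i)) / (n * total) - (n + 1) / n,
    # where x_(i) are the values sorted in ascending order (1-indexed).
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# A uniform map is perfectly equal; a one-hot map is highly unequal.
print(gini([1.0, 1.0, 1.0, 1.0]))  # 0.0
print(gini([0.0, 0.0, 0.0, 1.0]))  # 0.75
```

Under this reading, IGD's goal of "releasing" the inequality corresponds to driving such a concentration measure down while keeping adversarial robustness intact.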
