
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective

Abstract

Flatness of the loss surface not only correlates positively with generalization, but is also related to adversarial robustness, since perturbations of inputs relate non-linearly to perturbations of weights. In this paper, we empirically analyze the relation between adversarial examples and relative flatness with respect to the parameters of one layer. We observe a peculiar property of adversarial examples in the context of relative flatness: during an iterative first-order white-box attack, the loss surface measured around the adversarial example first becomes sharper until the label is flipped; if we keep the attack running, it runs into a flat uncanny valley where the label remains flipped. In extensive experiments, we observe this phenomenon across various model architectures and datasets, even for adversarially trained models. Our results also extend to large language models (LLMs), but due to the discrete nature of the input space and comparatively weak attacks, adversarial examples rarely reach truly flat regions. Most importantly, this phenomenon shows that flatness alone cannot explain adversarial robustness unless we can also guarantee the behavior of the function around the examples. We therefore theoretically connect relative flatness to adversarial robustness by bounding the third derivative of the loss surface, underlining the need for flatness in combination with a low global Lipschitz constant for a robust model.
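
To make the experimental setup concrete, the following is a minimal sketch (not the authors' code) of how one might trace a flatness measure along an iterative first-order attack: a PGD-style attack is run on an input, and at every iterate a crude sharpness proxy is evaluated, here the average loss increase under small random perturbations of one layer's weights, standing in for relative flatness with respect to that layer. The names model, layer, sharpness_proxy, and all step sizes and budgets are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def sharpness_proxy(model, layer, x, y, sigma=1e-3, n_samples=8):
    """Average loss increase under random perturbations of one layer's weights;
    a crude stand-in for relative flatness measured w.r.t. that layer."""
    with torch.no_grad():
        base = F.cross_entropy(model(x), y)
        orig = layer.weight.detach().clone()
        total = 0.0
        for _ in range(n_samples):
            layer.weight.copy_(orig + sigma * torch.randn_like(orig))
            total += (F.cross_entropy(model(x), y) - base).item()
        layer.weight.copy_(orig)  # restore the original weights
    return total / n_samples

def pgd_with_flatness_trace(model, layer, x, y, eps=8/255, alpha=2/255, steps=40):
    """Run a PGD-style attack and record the sharpness proxy at every iterate."""
    x_adv = x.clone().detach()
    trace = []
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
        trace.append(sharpness_proxy(model, layer, x_adv, y))
    return x_adv, trace

Under the paper's observation, such a trace would first show increasing sharpness up to the label flip and then a drop into a flat region as the attack continues; the proxy above is only one of several possible flatness surrogates.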

@article{walter2025_2405.16918,
  title={The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective},
  author={Nils Philipp Walter and Linara Adilova and Jilles Vreeken and Michael Kamp},
  journal={arXiv preprint arXiv:2405.16918},
  year={2025}
}