Are Attention Networks More Robust? Towards Exact Robustness Verification for Attention Networks

International Conference on Computer Safety, Reliability, and Security (SAFECOMP), 2022
Abstract

As an emerging type of Neural Networks (NNs), Attention Networks (ATNs) such as Transformers have been shown to be effective, in terms of accuracy, in many applications. This paper further considers their robustness. More specifically, we are curious about their maximum resilience against local input perturbations compared to the more conventional Multi-Layer Perceptrons (MLPs). Thus, we formulate the verification task as an optimization problem from which exact robustness values can be obtained. One major challenge, however, is the non-convexity and non-linearity of NNs. While the existing literature has handled this challenge to some extent with methods such as Branch-and-Bound, the additional difficulty introduced by the quadratic and exponential functions in ATNs has not been tackled. Our work reduces this gap by focusing on sparsemax-based ATNs, encoding them into Mixed Integer Quadratically Constrained Programming problems, and proposing two powerful heuristics for a speedup of one order of magnitude. Finally, we train and evaluate several sparsemax-based ATNs and similar-sized ReLU-based MLPs on a lane departure warning task and show that the former are surprisingly less robust despite their generally higher accuracy.
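The abstract's focus on sparsemax-based attention (rather than softmax) is what makes the piecewise-linear MIQCP encoding possible: sparsemax is the Euclidean projection of the attention scores onto the probability simplex and often returns exactly-zero weights. As a hedged illustration (not the paper's encoding, just the standard closed-form algorithm of Martins & Astudillo, 2016), a minimal pure-Python sketch:

```python
def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex.

    Unlike softmax, the result can contain exact zeros, which is what
    allows a piecewise-linear (mixed-integer) encoding of the attention layer.
    """
    z_sorted = sorted(z, reverse=True)
    cumsum = 0.0
    tau = 0.0
    # Find the support size k(z): the largest k with 1 + k * z_(k) > sum_{j<=k} z_(j)
    for k, zk in enumerate(z_sorted, start=1):
        cumsum += zk
        if 1 + k * zk > cumsum:
            tau = (cumsum - 1.0) / k  # threshold for the current support
    # Shift by tau and clip at zero
    return [max(zi - tau, 0.0) for zi in z]
```

For example, `sparsemax([2.0, 1.0, 0.1])` puts all mass on the first entry (`[1.0, 0.0, 0.0]`), whereas softmax would spread probability over every entry.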
