Your Attention Matters: to Improve Model Robustness to Noise and Spurious Correlations

Main: 7 pages · Appendix: 4 pages · Bibliography: 3 pages · 11 figures · 7 tables
Abstract

Self-attention mechanisms are foundational to Transformer architectures, supporting their impressive success in a wide range of tasks. While there are many self-attention variants, their robustness to noise and spurious correlations has not been well studied. This study evaluates Softmax, Sigmoid, Linear, Doubly Stochastic, and Cosine attention within Vision Transformers under different data corruption scenarios. Through testing across the CIFAR-10, CIFAR-100, and Imagenette datasets, we show that Doubly Stochastic attention is the most robust. It consistently outperformed the next best mechanism by 0.1%-5.1% when training data, or both training and testing data, were corrupted. Our findings inform self-attention selection in contexts with imperfect data. The code used is available at this https URL.
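The abstract contrasts standard Softmax attention, whose weight rows each sum to 1, with Doubly Stochastic attention, whose rows and columns both sum to 1. A minimal NumPy sketch of the two normalizations (the doubly stochastic version here uses Sinkhorn-Knopp iteration, a common way to obtain such matrices; the function names and iteration count are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def softmax_attention(q, k):
    # Standard scaled dot-product attention weights: each row sums to 1.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d), axis=-1)

def doubly_stochastic_attention(q, k, n_iters=50):
    # Illustrative doubly stochastic attention: alternate row and column
    # normalization (Sinkhorn-Knopp) of the positive score matrix so that
    # rows and columns both (approximately) sum to 1.
    d = q.shape[-1]
    a = np.exp(q @ k.T / np.sqrt(d))
    for _ in range(n_iters):
        a = a / a.sum(axis=1, keepdims=True)  # normalize rows
        a = a / a.sum(axis=0, keepdims=True)  # normalize columns
    return a
```

The column constraint spreads attention mass across all tokens, which is one intuition for why it might resist noise: no single corrupted token can absorb a disproportionate share of every query's attention.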
