Always Skip Attention

Abstract

We highlight a curious empirical result within modern Vision Transformers (ViTs). Specifically, self-attention catastrophically fails to train unless it is paired with a skip connection. This is in contrast to other components of a ViT, which continue to perform well (albeit suboptimally) when skip connections are removed. Further, we show that this critical dependence on skip connections is a relatively recent phenomenon, with earlier deep architectures (e.g., CNNs) performing well in their absence. In this paper, we theoretically characterize the self-attention mechanism as fundamentally ill-conditioned and therefore uniquely dependent on skip connections for regularization. Additionally, we propose Token Graying -- a simple yet effective complement to skip connections that further improves the conditioning of the input tokens. We validate our approach under both supervised and self-supervised training.
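To make the setting described above concrete, the sketch below (not the authors' code) shows a standard pre-norm ViT encoder block in which the skip connection around self-attention can be toggled off; all names and hyperparameters are illustrative assumptions, and the Token Graying procedure is not reproduced here.

# Minimal sketch, assuming a standard pre-norm ViT block; removing the
# attention skip corresponds to the failure case studied in the paper.
import torch
import torch.nn as nn

class ViTBlock(nn.Module):
    def __init__(self, dim=384, num_heads=6, mlp_ratio=4,
                 skip_attention=True, skip_mlp=True):
        super().__init__()
        self.skip_attention = skip_attention  # toggle the skip around self-attention
        self.skip_mlp = skip_mlp              # toggle the skip around the MLP
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        # Without the skip, the block output is the raw attention output,
        # which the paper argues is ill-conditioned and fails to train.
        x = x + attn_out if self.skip_attention else attn_out
        mlp_out = self.mlp(self.norm2(x))
        x = x + mlp_out if self.skip_mlp else mlp_out
        return x

# Example: a block with the attention skip removed.
block = ViTBlock(skip_attention=False)
tokens = torch.randn(2, 197, 384)  # e.g., 196 patch tokens + 1 class token
out = block(tokens)
print(out.shape)  # torch.Size([2, 197, 384])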

@article{ji2025_2505.01996,
  title={Always Skip Attention},
  author={Yiping Ji and Hemanth Saratchandran and Peyman Moghaddam and Simon Lucey},
  journal={arXiv preprint arXiv:2505.01996},
  year={2025}
}