
Rethinking Attention: Polynomial Alternatives to Softmax in Transformers

Main: 10 pages, 7 figures, 6 tables; bibliography: 2 pages; appendix: 8 pages
Abstract

This paper questions whether the strong performance of softmax attention in transformers stems from producing a probability distribution over inputs. Instead, we argue that softmax's effectiveness lies in its implicit regularization of the Frobenius norm of the attention matrix, which stabilizes training. Motivated by this, we explore alternative activations, specifically polynomials, that achieve a similar regularization effect. Our theoretical analysis shows that certain polynomials can serve as effective substitutes for softmax, achieving strong performance across transformer applications despite violating softmax's typical properties of positivity, normalization, and sparsity. Extensive experiments support these findings, offering a new perspective on attention mechanisms.
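
To make the idea concrete, the sketch below contrasts standard softmax attention with a polynomial variant in which the softmax row normalization is replaced by an element-wise power of the scaled score matrix. The cubic degree and the 1/sqrt(n*d) scaling here are illustrative assumptions chosen to keep the Frobenius norm of the attention matrix controlled; they are not necessarily the paper's exact polynomial or normalization.

```python
import numpy as np


def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention with softmax row normalization."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V


def polynomial_attention(Q, K, V, degree=3, scale=None):
    """Attention with an element-wise polynomial in place of softmax.

    The resulting weights need not be positive, normalized, or sparse;
    the scaling is an illustrative choice to bound the attention matrix's
    Frobenius norm, not the paper's prescribed constant.
    """
    n, d = Q.shape[0], Q.shape[-1]
    if scale is None:
        scale = 1.0 / np.sqrt(n * d)
    weights = (scale * (Q @ K.T)) ** degree  # no softmax, no row normalization
    return weights @ V


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
    print(softmax_attention(Q, K, V).shape)     # (8, 16)
    print(polynomial_attention(Q, K, V).shape)  # (8, 16)
```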
