p-Laplacian Transformer

p-Laplacian regularization, rooted in graph and image signal processing, introduces a parameter p to control the regularization effect on such data. Smaller values of p promote sparsity and interpretability, while larger values encourage smoother solutions. In this paper, we first show that the self-attention mechanism obtains the minimal Laplacian regularization (p=2) and encourages smoothness in the architecture. However, this smoothness is not suitable for the heterophilic structure of self-attention in transformers, where attention weights between tokens in close proximity and non-close ones are assigned indistinguishably. From that insight, we then propose a novel class of transformers, namely the p-Laplacian Transformer (p-LaT), which leverages the p-Laplacian regularization framework to harness the heterophilic features within self-attention layers. In particular, low p values effectively assign higher attention weights to tokens that are in close proximity to the current token being processed. We empirically demonstrate the advantages of p-LaT over baseline transformers on a wide range of benchmark datasets.
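
For reference, the p-Laplacian (p-Dirichlet) regularization of a graph signal f with edge weights w_{ij} is commonly written in the graph signal processing literature as (standard form, not necessarily the paper's exact notation or normalization):

\[
J_p(f) \;=\; \frac{1}{2p} \sum_{i,j} w_{ij}\, \lVert f_i - f_j \rVert^{p},
\]

where p = 2 recovers the usual quadratic graph Laplacian energy, whose minimization yields the smooth averaging behavior associated with standard self-attention, while smaller p penalizes large token-to-token differences less aggressively and thus favors sparser, more localized attention patterns.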