p-Laplacian Transformer

Abstract

p-Laplacian regularization, rooted in graph and image signal processing, introduces a parameter p that controls the strength and character of the regularization. Smaller values of p promote sparsity and interpretability, while larger values encourage smoother solutions. In this paper, we first show that the self-attention mechanism corresponds to minimizing a Laplacian regularization objective (p=2) and therefore encourages smoothness in the architecture. However, this smoothness is ill-suited to the heterophilic structure of self-attention in transformers, where attention weights are assigned indistinguishably to tokens that are close to and far from the token being processed. Building on this insight, we propose a novel class of transformers, namely the p-Laplacian Transformer (p-LaT), which leverages the p-Laplacian regularization framework to harness the heterophilic features within self-attention layers. In particular, low values of p effectively assign higher attention weights to tokens in close proximity to the current token. We empirically demonstrate the advantages of p-LaT over baseline transformers on a wide range of benchmark datasets.
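The abstract does not give the exact formulation, but the intuition behind p-Laplacian re-weighting of attention can be sketched as follows. In graph p-Laplacian regularization, each edge contribution is scaled by a factor of the form ||x_i - x_j||^(p-2), which for p < 2 grows as the endpoints get closer. The sketch below applies this idea on top of standard softmax attention; the function name p_laplacian_attention, the choice to measure distances between value vectors, and all parameter values are illustrative assumptions rather than the paper's actual layer (for p = 2 the re-weighting factor is 1 and plain softmax attention is recovered).

```python
import numpy as np

def p_laplacian_attention(X, Wq, Wk, Wv, p=1.5, eps=1e-6):
    """Illustrative p-Laplacian-weighted self-attention (a sketch, not the paper's layer).

    For p = 2 the re-weighting factor equals 1 and the layer reduces to plain
    softmax attention (Laplacian smoothing over the token graph). For p < 2,
    the factor ||v_i - v_j||^(p - 2) increases as tokens get closer, so nearby
    (similar) tokens receive relatively larger attention weights.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]

    # Standard scaled dot-product attention defines the graph weights w_ij.
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)

    # Pairwise distances between value vectors; each edge is re-weighted by
    # ||v_i - v_j||^(p - 2), leaving the self-edge unchanged.
    diff = V[:, None, :] - V[None, :, :]            # (n, n, d)
    dist = np.linalg.norm(diff, axis=-1)            # (n, n)
    np.fill_diagonal(dist, 1.0)
    w_p = w * np.maximum(dist, eps) ** (p - 2)
    w_p /= w_p.sum(axis=-1, keepdims=True)          # renormalize rows

    return w_p @ V

# Toy usage: 8 tokens with 16-dimensional features.
rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
out = p_laplacian_attention(X, Wq, Wk, Wv, p=1.2)   # low p: locality-biased attention
print(out.shape)                                     # (8, 16)
```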
