Graph Transformers Dream of Electric Flow

Abstract

We show theoretically and empirically that the linear Transformer, when applied to graph data, can implement algorithms that solve canonical problems such as electric flow and eigenvector decomposition. The Transformer has access to information on the input graph only via the graph's incidence matrix. We present explicit weight configurations for implementing each algorithm, and we bound the constructed Transformers' errors by the errors of the underlying algorithms. Our theoretical findings are corroborated by experiments on synthetic data. Additionally, on a real-world molecular regression task, we observe that the linear Transformer is capable of learning a more effective positional encoding than the default one based on Laplacian eigenvectors. Our work is an initial step towards elucidating the inner workings of the Transformer for graph data. Code is available at this https URL
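
Both problems named above can be computed directly from the graph's incidence matrix, which is the only input the Transformer sees. The following sketch (Python/NumPy, not the authors' code) is a minimal reference implementation of the two underlying computations; the toy graph, demand vector, and two-dimensional encoding width are illustrative assumptions.

import numpy as np

# Toy graph: 4 nodes, 5 edges. Row e of the incidence matrix B has +1 at the
# edge's head and -1 at its tail, so B @ phi gives per-edge potential differences.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)
B = np.zeros((m, n))
for e, (u, v) in enumerate(edges):
    B[e, u], B[e, v] = 1.0, -1.0

conductances = np.ones(m)              # unit resistances, for simplicity
L = B.T @ (conductances[:, None] * B)  # graph Laplacian L = B^T C B

# Electric flow: inject one unit of current at node 0, extract it at node 3.
demands = np.array([1.0, 0.0, 0.0, -1.0])
potentials = np.linalg.pinv(L) @ demands   # L is singular; pseudo-inverse solve
flow = conductances * (B @ potentials)     # Ohm's law on each edge
assert np.allclose(B.T @ flow, demands)    # current is conserved at every node

# Eigenvector decomposition of L: its nontrivial eigenvectors give the default
# Laplacian positional encoding that the learned encoding is compared against.
eigvals, eigvecs = np.linalg.eigh(L)
pos_enc = eigvecs[:, 1:3]  # drop the constant eigenvector; keep the next two

The pseudo-inverse solve appears here only for brevity; the paper's point is that a linear Transformer's layers can themselves realize algorithms for these problems, with error bounded by that of the underlying algorithm.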

@article{cheng2025_2410.16699,
  title={Graph Transformers Dream of Electric Flow},
  author={Xiang Cheng and Lawrence Carin and Suvrit Sra},
  journal={arXiv preprint arXiv:2410.16699},
  year={2025}
}