
Homomorphism Counts as Structural Encodings for Graph Learning

Abstract

Graph Transformers are popular neural networks that extend the well-known Transformer architecture to the graph domain. These architectures operate by applying self-attention on graph nodes and incorporating graph structure through the use of positional encodings (e.g., Laplacian positional encoding) or structural encodings (e.g., random-walk structural encoding). The quality of such encodings is critical, since they provide the necessary graph inductive biases to condition the model on graph structure. In this work, we propose motif structural encoding (MoSE) as a flexible and powerful structural encoding framework based on counting graph homomorphisms. Theoretically, we compare the expressive power of MoSE to random-walk structural encoding and relate both encodings to the expressive power of standard message passing neural networks. Empirically, we observe that MoSE outperforms other well-known positional and structural encodings across a range of architectures, and it achieves state-of-the-art performance on a widely studied molecular property prediction dataset.
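
As a rough illustration of the idea (not the authors' implementation), the Python sketch below brute-forces rooted homomorphism counts of a few small motifs and collects them into per-node feature vectors. The motif set (short cycles), the function names, and the use of NetworkX are assumptions made here for concreteness; practical homomorphism counting uses much more efficient algorithms for patterns of bounded treewidth.

import itertools
import networkx as nx

def rooted_hom_count(pattern: nx.Graph, root, graph: nx.Graph, anchor) -> int:
    # Count homomorphisms h: V(pattern) -> V(graph) with h(root) = anchor.
    # A homomorphism must map every pattern edge onto an edge of the target graph.
    # Brute force: exponential in motif size, for illustration only.
    free_nodes = [u for u in pattern.nodes if u != root]
    count = 0
    for image in itertools.product(graph.nodes, repeat=len(free_nodes)):
        h = dict(zip(free_nodes, image))
        h[root] = anchor
        if all(graph.has_edge(h[u], h[v]) for u, v in pattern.edges):
            count += 1
    return count

def mose_features(graph: nx.Graph, motifs) -> dict:
    # One feature vector per node: rooted homomorphism counts, one entry per motif.
    return {v: [rooted_hom_count(p, r, graph, v) for p, r in motifs]
            for v in graph.nodes}

# Hypothetical motif set: cycles of length 3 and 4, each rooted at node 0.
motifs = [(nx.cycle_graph(k), 0) for k in (3, 4)]
graph = nx.karate_club_graph()
features = mose_features(graph, motifs)
print(features[0])  # first entry = twice the number of triangles through node 0

The resulting per-node vectors could then be concatenated to the input node features, analogously to how random-walk structural encodings are used to condition a Graph Transformer on structure.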

@article{bao2025_2410.18676,
  title={Homomorphism Counts as Structural Encodings for Graph Learning},
  author={Linus Bao and Emily Jin and Michael Bronstein and İsmail İlkan Ceylan and Matthias Lanzinger},
  journal={arXiv preprint arXiv:2410.18676},
  year={2025}
}