Contextualized Messages Boost Graph Representations

Abstract

Graph neural networks (GNNs) have gained significant attention in recent years for their ability to process data that may be represented as graphs. This has prompted several studies to explore their representational capability based on the graph isomorphism task. Notably, these works inherently assume a countable node feature representation, potentially limiting their applicability. Interestingly, only a few studies examine GNNs with uncountable node feature representations. In this paper, a new perspective on the representational capability of GNNs is investigated across all levels (node-level, neighborhood-level, and graph-level) when the space of node feature representations is uncountable. Specifically, the injective and metric requirements of previous works are softly relaxed by employing a pseudometric distance on the input space to create a soft-injective function, such that distinct inputs may produce similar outputs if and only if the pseudometric deems the inputs sufficiently similar in some representation. As a consequence, a simple and computationally efficient soft-isomorphic relational graph convolution network (SIR-GCN) that emphasizes the contextualized transformation of neighborhood feature representations via anisotropic and dynamic message functions is proposed. Furthermore, a mathematical discussion on the relationship between SIR-GCN and key GNNs in the literature is laid out to put the contribution into context, establishing SIR-GCN as a generalization of classical GNN methodologies. To close, experiments on synthetic and benchmark datasets demonstrate the relative superiority of SIR-GCN, outperforming comparable models in node and graph property prediction tasks.
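
For illustration, below is a minimal PyTorch sketch of a message-passing layer in the spirit of the anisotropic, dynamic message function the abstract describes. The class name SIRGCNLayer, the weights w_q, w_k, w_r, and the specific update (summing sigma(W_q h_u + W_k h_v) over neighbors v of node u, then applying an output projection) are assumptions made for exposition and may differ from the paper's exact formulation.

import torch
import torch.nn as nn

class SIRGCNLayer(nn.Module):
    """Hypothetical sketch of a contextualized message-passing layer.

    The message on edge (v -> u) is a nonlinear function of BOTH endpoint
    features, sigma(W_q h_u + W_k h_v), so identical neighbor features can
    yield different messages under different receiving nodes (anisotropy,
    dynamic messages). Messages are sum-aggregated and then projected.
    """

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.w_q = nn.Linear(in_dim, hidden_dim)   # transforms the receiving node (context)
        self.w_k = nn.Linear(in_dim, hidden_dim)   # transforms the neighboring node
        self.w_r = nn.Linear(hidden_dim, out_dim)  # output projection after aggregation
        self.sigma = nn.ReLU()

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim); edge_index: (2, num_edges) with rows (src, dst)
        src, dst = edge_index
        # Contextualized message: neighbor feature transformed jointly with
        # the receiver's feature before the nonlinearity.
        msg = self.sigma(self.w_q(h)[dst] + self.w_k(h)[src])
        # Sum-aggregate incoming messages at each destination node.
        agg = torch.zeros(h.size(0), msg.size(1), device=h.device, dtype=h.dtype)
        agg.index_add_(0, dst, msg)
        return self.w_r(agg)

# Toy usage: 4 nodes on a directed cycle (0->1, 1->2, 2->3, 3->0).
layer = SIRGCNLayer(in_dim=8, hidden_dim=16, out_dim=8)
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
out = layer(h, edge_index)  # shape (4, 8)

Note the contrast with isotropic convolutions such as a vanilla GCN, where the message depends only on the neighbor's feature: applying the nonlinearity to the combined receiver-and-neighbor transform is what makes the message a function of the neighborhood context.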

@article{lim2025_2403.12529,
  title={Contextualized Messages Boost Graph Representations},
  author={Brian Godwin Lim and Galvin Brice Sy Lim and Renzo Roel Tan and Kazushi Ikeda},
  journal={arXiv preprint arXiv:2403.12529},
  year={2025}
}