ResearchTrend.AI
arXiv:2304.10851

What Do GNNs Actually Learn? Towards Understanding their Representations

21 April 2023
Giannis Nikolentzos
Michail Chatzianastasis
Michalis Vazirgiannis
    GNN
    AI4CE
Abstract

In recent years, graph neural networks (GNNs) have achieved great success in the field of graph representation learning. Although prior work has shed light on the expressiveness of those models (i.e., whether they can distinguish pairs of non-isomorphic graphs), it is still not clear what structural information is encoded into the node representations that are learned by those models. In this paper, we investigate which properties of graphs are captured purely by these models, when no node attributes are available. Specifically, we study four popular GNN models, and we show that two of them embed all nodes into the same feature vector, while the other two models generate representations that are related to the number of walks over the input graph. Strikingly, structurally dissimilar nodes can have similar representations at some layer k > 1, if they have the same number of walks of length k. We empirically verify our theoretical findings on real datasets.
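The walk-count connection described in the abstract can be illustrated with a minimal sketch (not code from the paper, and the toy graph is a hypothetical example): when no node attributes are available and initial features are set to all-ones, a GNN layer that simply sums neighbour features computes, after k layers, the vector A^k·1, whose v-th entry is the number of walks of length k ending at node v.

```python
import numpy as np

# Hypothetical toy graph: a path 0-1-2-3 plus an extra edge (1, 3).
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (1, 3)]:
    A[u, v] = A[v, u] = 1.0

# Sum-aggregation GNN with no node attributes: start from all-ones
# features and repeatedly aggregate neighbours, h^(k) = A h^(k-1).
h = np.ones(4)
for _ in range(3):
    h = A @ h

# After k layers, h equals (A^k) 1, i.e. the number of walks of
# length k ending at each node.
walks = np.linalg.matrix_power(A, 3) @ np.ones(4)
assert np.allclose(h, walks)
```

Note that nodes 2 and 3 are structurally different (node 3 closes a triangle with 1 and 2, node 2 lies on it differently), yet both end up with the same value here because they have the same number of length-3 walks, matching the abstract's observation about dissimilar nodes sharing representations.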
