Focus Where It Matters: Graph Selective State Focused Attention Networks

21 October 2024
Shikhar Vashistha
Neetesh Kumar
arXiv:2410.15849
Abstract

Traditional graph neural networks (GNNs) lack scalability and lose individual node characteristics due to over-smoothing, especially in deeper networks. This results in sub-optimal feature representations and degrades model performance on tasks involving dynamically changing graphs. To address this issue, we present the Graph Selective State Focused Attention Network (GSAN), a neural network architecture for graph-structured data. GSAN combines multi-head masked self-attention (MHMSA) and selective state space modeling (S3M) layers to overcome the limitations of GNNs. The MHMSA layer allows GSAN to dynamically emphasize crucial node connections, particularly in evolving graph environments. The S3M layer enables the network to adapt to changing node states, improving predictions of node behavior in varying contexts without requiring prior knowledge of the graph structure. Furthermore, the S3M layer enhances generalization to unseen structures and interprets how node states influence link importance. With this, GSAN outperforms traditional GNNs on both inductive and transductive tasks and overcomes the issues they experience. To analyze GSAN's performance, a set of comparative experiments against state-of-the-art methods is conducted on graph benchmark datasets, including the Cora, Citeseer, and Pubmed citation networks and the protein-protein interaction dataset; GSAN improves classification accuracy (F1-score) by 1.56%, 8.94%, 0.37%, and 1.54%, respectively.
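The abstract describes the architecture only at a high level, and no code is published on this page. The following PyTorch sketch is a hypothetical illustration of how one GSAN-style block might pair masked multi-head self-attention over graph neighbours with a gated, selective state update; the class name GSANBlock, the gating form, and all dimensions are assumptions, not the authors' implementation.

# Illustrative sketch only: layer names, dimensions, and the gating form are
# assumptions inferred from the abstract, not the paper's actual design.
import torch
import torch.nn as nn

class GSANBlock(nn.Module):
    """Hypothetical GSAN block: masked multi-head self-attention over node
    features (MHMSA), with the graph adjacency supplying the attention mask,
    followed by a simple selective, gated state update (S3M-style)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Selective gate: decides per node how much of the candidate state to keep.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.update = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, dim); adj: (num_nodes, num_nodes), 1 = edge.
        # Nodes may only attend to their graph neighbours (and themselves).
        mask = (adj + torch.eye(adj.size(0), device=adj.device)) == 0
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        # Selective state update: gated blend of previous and candidate node states.
        g = self.gate(x)
        return self.norm2(g * self.update(x) + (1.0 - g) * x)

# Toy usage: 5 nodes, 16-dim features, a random undirected adjacency matrix.
if __name__ == "__main__":
    n, d = 5, 16
    adj = (torch.rand(n, n) > 0.5).float()
    adj = ((adj + adj.T) > 0).float()
    block = GSANBlock(dim=d)
    out = block(torch.randn(1, n, d), adj)
    print(out.shape)  # torch.Size([1, 5, 16])

Restricting attention to graph neighbours keeps the layer structure-aware without message passing, while the sigmoid gate stands in for the selective state update the abstract attributes to the S3M layer; the paper's actual state space formulation is likely more involved.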
