AugWard: Augmentation-Aware Representation Learning for Accurate Graph Classification

27 March 2025
Minjun Kim
Jaehyeon Choi
SeungJoo Lee
Jinhong Jung
U. Kang
Topics: OOD, GNN
Abstract

How can we accurately classify graphs? Graph classification is a pivotal task in data mining with applications in social network analysis, web analysis, drug discovery, molecular property prediction, and more. Graph neural networks have achieved state-of-the-art performance in graph classification, but they consistently struggle with overfitting. To mitigate overfitting, researchers have introduced various representation learning methods that utilize graph augmentation. However, existing methods rely on a simplistic use of graph augmentation, which loses augmentation-induced differences and limits the expressiveness of the learned representations. In this paper, we propose AugWard (Augmentation-Aware Training with Graph Distance and Consistency Regularization), a novel graph representation learning framework that carefully accounts for the diversity introduced by graph augmentation. AugWard applies augmentation-aware training to predict the graph distance between an augmented graph and its original, aligning the representation difference directly with the graph distance at both the feature and structure levels. Furthermore, AugWard employs consistency regularization to encourage the classifier to handle these richer representations. Experimental results show that AugWard achieves state-of-the-art performance in supervised and semi-supervised graph classification as well as transfer learning.
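The abstract describes three ingredients: a supervised classification loss, an augmentation-aware term that regresses the graph distance between an augmented graph and its original from the representation difference, and a consistency regularizer on the classifier. The sketch below illustrates how such an objective could be combined in PyTorch; the distance predictor (distance_head), the KL-based consistency term, and the loss weights alpha/beta are illustrative assumptions, not the paper's exact formulation.

import torch.nn.functional as F


def augward_loss(z_orig, z_aug, logits_orig, logits_aug, labels,
                 d_true, distance_head, alpha=1.0, beta=1.0):
    """Sketch of an augmentation-aware training objective.

    z_orig, z_aug : graph representations of the original / augmented graphs
    logits_*      : classifier outputs for both views
    d_true        : precomputed graph distance (feature + structure level)
                    between each pair, shape [batch]
    distance_head : small network predicting distance from the representation
                    difference, e.g. torch.nn.Linear(hidden_dim, 1)
                    (hypothetical component)
    """
    # 1) Supervised classification loss on the original graphs.
    cls_loss = F.cross_entropy(logits_orig, labels)

    # 2) Augmentation-aware term: align the representation difference with
    #    the graph distance by regressing the true distance from z_aug - z_orig.
    d_pred = distance_head(z_aug - z_orig).squeeze(-1)
    dist_loss = F.mse_loss(d_pred, d_true)

    # 3) Consistency regularization: keep the classifier's predictions on the
    #    augmented view close to those on the original view.
    cons_loss = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                         F.softmax(logits_orig, dim=-1),
                         reduction="batchmean")

    return cls_loss + alpha * dist_loss + beta * cons_loss

# Example wiring (hypothetical shapes):
#   distance_head = torch.nn.Linear(hidden_dim, 1)
#   loss = augward_loss(z, z_aug, logits, logits_aug, y, d_true, distance_head)

In this reading, the distance-regression term is what makes the training "augmentation-aware": rather than forcing the two views to collapse to identical representations, their difference is calibrated against how far apart the graphs actually are.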

View on arXiv
@article{kim2025_2503.21105,
  title={AugWard: Augmentation-Aware Representation Learning for Accurate Graph Classification},
  author={Minjun Kim and Jaehyeon Choi and SeungJoo Lee and Jinhong Jung and U Kang},
  journal={arXiv preprint arXiv:2503.21105},
  year={2025}
}