AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack

6 May 2020
Ao Liu, Beibei Li, Tao Li, Pan Zhou, Rui Wang
Abstract

Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks, such as maliciously inserting or deleting graph edges. However, a theoretical proof of such vulnerability remains a challenge, and effective defense schemes are still open problems. In this paper, we first generalize the formulation of edge-perturbing attacks and rigorously prove the vulnerability of GCNs to such attacks in node classification tasks. We then propose an anonymous graph convolutional network, named AN-GCN, to counter edge-perturbing attacks. Specifically, we present a node localization theorem to demonstrate how a GCN locates nodes during its training phase. We further design a node position generator based on staggered Gaussian noise, devise a discriminator based on spectral graph convolution to detect the generated node positions, and derive the joint optimization of the generator and discriminator. AN-GCN can classify nodes without taking their positions as input. We show that AN-GCN is secure against edge-perturbing attacks in node classification tasks, since it classifies nodes without edge information and thus leaves attackers no edges to perturb. Extensive evaluations demonstrate the effectiveness of the general edge-perturbing attack model in manipulating the classification results of target nodes. More importantly, the proposed AN-GCN achieves 82.7% node classification accuracy without edge-reading permission, outperforming the state-of-the-art GCN.
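
To make the attack surface concrete, the sketch below (an illustration only, not the authors' AN-GCN; the toy graph, the `gcn_layer` helper, and all names are hypothetical) shows a standard spectral GCN layer in NumPy and how a single maliciously inserted edge alters node outputs, which is exactly the kind of edge perturbation the paper defends against.

```python
# Minimal sketch, assuming a vanilla GCN layer; not the AN-GCN architecture.
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2} used by GCNs."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A, X, W):
    """One spectral graph convolution: ReLU(A_hat X W)."""
    return np.maximum(normalized_adjacency(A) @ X @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a 4-node path graph (hypothetical)
X = rng.normal(size=(4, 3))                  # node features
W = rng.normal(size=(3, 2))                  # layer weights

clean = gcn_layer(A, X, W)

# Edge-perturbing attack: maliciously insert edge (0, 3).
A_attacked = A.copy()
A_attacked[0, 3] = A_attacked[3, 0] = 1.0
perturbed = gcn_layer(A_attacked, X, W)

print("max output change from one inserted edge:",
      np.abs(perturbed - clean).max())
```

Because the layer reads the adjacency matrix directly, any edge insertion or deletion propagates into every downstream representation; AN-GCN avoids this dependence by classifying nodes without taking edge positions as input.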
