How Feasible is Augmenting Fake Nodes with Learnable Features as a Counter-strategy against Link Stealing Attacks?

12 March 2025
Mir Imtiaz Mostafiz
Imtiaz Karim
Elisa Bertino
Abstract

Graph Neural Networks (GNNs) are widely deployed for graph-based prediction tasks. However, as effective as GNNs are at learning from graph data, they also carry a risk of privacy leakage. For instance, an attacker can run carefully crafted queries against a GNN and, from the responses, infer the existence of an edge between a pair of nodes. This attack, dubbed a "link-stealing" attack, can jeopardize user privacy by leaking potentially sensitive information. To protect against it, we propose an approach called "Node Augmentation for Restricting Graphs from Insinuating their Structure" (NARGIS) and study its feasibility. NARGIS reshapes the graph embedding space so that the posteriors from the GNN model still provide utility for the prediction task but introduce ambiguity for link-stealing attackers. To this end, NARGIS applies spectral clustering on the given graph and augments it with new nodes that have learned, rather than fixed, features. It uses tri-level optimization to learn the parameters of the GNN model, a surrogate attacker model, and our defense model (i.e., the learnable node features). We extensively evaluate NARGIS on three benchmark citation datasets under eight knowledge-availability settings for the attacker. We also evaluate model fidelity and defense performance against influence-based link inference attacks. Our studies reveal NARGIS's main strength: a superior fidelity-privacy trade-off in a significant number of cases. We also identify the cases in which the model needs improvement, and propose ways to integrate different schemes to make it more robust against link-stealing attacks.
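The augmentation step described above — spectral clustering followed by attaching fake nodes with trainable features — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the cluster-assignment shortcut, the number of fake nodes, the fake-node wiring (one fake node per cluster, connected to all cluster members), and the random feature initialization (a stand-in for features later learned via tri-level optimization) are all assumptions for illustration.

```python
import numpy as np

def spectral_clusters(adj, k):
    """Assign each node to one of k clusters via a Laplacian eigenvector embedding.
    Uses a simplified nearest-seed assignment in place of full k-means."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(lap)         # eigenvectors, ascending eigenvalues
    emb = vecs[:, :k]                     # embed nodes in k smallest eigenvectors
    seeds = emb[np.linspace(0, len(emb) - 1, k).astype(int)]  # crude seed rows
    dists = ((emb[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def augment_with_fake_nodes(adj, feats, k, rng):
    """Add one fake node per spectral cluster, linked to all cluster members.
    Fake-node features start random; in NARGIS they would be optimized."""
    labels = spectral_clusters(adj, k)
    n, d = feats.shape
    new_adj = np.zeros((n + k, n + k))
    new_adj[:n, :n] = adj                 # original edges are preserved
    new_feats = np.vstack([feats, rng.normal(scale=0.1, size=(k, d))])
    for c in range(k):
        members = np.where(labels == c)[0]
        new_adj[n + c, members] = 1       # wire fake node c into its cluster
        new_adj[members, n + c] = 1
    return new_adj, new_feats
```

In a full pipeline, `new_feats[n:]` would be the defense model's parameters, updated by the outer level of the tri-level optimization while the GNN and surrogate attacker are trained at the inner levels.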

@article{mostafiz2025_2503.09726,
  title={How Feasible is Augmenting Fake Nodes with Learnable Features as a Counter-strategy against Link Stealing Attacks?},
  author={Mir Imtiaz Mostafiz and Imtiaz Karim and Elisa Bertino},
  journal={arXiv preprint arXiv:2503.09726},
  year={2025}
}