AuditVotes: A Framework Towards More Deployable Certified Robustness for Graph Neural Networks

29 March 2025
Yuni Lai, Yulin Zhu, Yixuan Sun, Yulun Wu, Bin Xiao, Gaolei Li, Jianhua Li, Kai Zhou
Abstract

Despite advancements in Graph Neural Networks (GNNs), adaptive attacks continue to challenge their robustness. Certified robustness based on randomized smoothing has emerged as a promising solution, offering provable guarantees that a model's predictions remain stable under adversarial perturbations within a specified range. However, existing methods face a critical trade-off between accuracy and robustness, as achieving stronger robustness requires introducing greater noise into the input graph. This excessive randomization degrades data quality and disrupts prediction consistency, limiting the practical deployment of certifiably robust GNNs in real-world scenarios where both accuracy and robustness are essential. To address this challenge, we propose AuditVotes, the first framework to achieve both high clean accuracy and certifiably robust accuracy for GNNs. It integrates randomized smoothing with two key components, augmentation and conditional smoothing, aiming to improve data quality and prediction consistency. The augmentation, acting as a pre-processing step, de-noises the randomized graph, significantly improving data quality and clean accuracy. The conditional smoothing, serving as a post-processing step, employs a filtering function to selectively count votes, thereby filtering out low-quality predictions and improving voting consistency. Extensive experimental results demonstrate that AuditVotes significantly enhances clean accuracy, certified robustness, and empirical robustness while maintaining high computational efficiency. Notably, compared to baseline randomized smoothing, AuditVotes improves clean accuracy by 437.1% and certified accuracy by 409.3% when the attacker can arbitrarily insert 20 edges on the Cora-ML dataset, representing a substantial step toward deploying certifiably robust GNNs in real-world applications.
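To make the described pipeline concrete, below is a minimal, hypothetical Python sketch of the voting loop suggested by the abstract: each randomized sample of the graph is first de-noised by an augmentation step, and only predictions that pass a confidence filter are counted as votes. The callables `perturb`, `augment`, and `confidence`, along with the default threshold and class count, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smoothed_predict(model, graph, perturb, augment, confidence,
                     num_samples=1000, num_classes=7, threshold=0.9, seed=0):
    """Sketch of smoothing with filtered voting: sample randomized graphs,
    de-noise each sample (augmentation), and count only votes whose
    confidence passes the filter (conditional smoothing)."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(num_samples):
        noisy = perturb(graph, rng)          # randomized-smoothing noise, e.g. random edge flips
        denoised = augment(noisy)            # pre-processing: de-noise the randomized graph
        probs = model(denoised)              # per-class probabilities for the target node
        if confidence(probs) >= threshold:   # post-processing: drop low-confidence predictions
            votes[int(np.argmax(probs))] += 1
    # The smoothed prediction is the class with the most surviving votes.
    return int(np.argmax(votes)), votes
```

In a full certification pipeline, the returned vote counts would then feed the standard randomized-smoothing bound to compute the certified perturbation budget; that step is omitted here.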

@article{lai2025_2503.22998,
  title={AuditVotes: A Framework Towards More Deployable Certified Robustness for Graph Neural Networks},
  author={Yuni Lai and Yulin Zhu and Yixuan Sun and Yulun Wu and Bin Xiao and Gaolei Li and Jianhua Li and Kai Zhou},
  journal={arXiv preprint arXiv:2503.22998},
  year={2025}
}