Adversarial Robustness Certification for Bayesian Neural Networks

23 June 2023
Matthew Wicker
Andrea Patane
Luca Laurenti
Marta Z. Kwiatkowska
Abstract

We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations. Given a compact set of input points $T \subseteq \mathbb{R}^m$ and a set of output points $S \subseteq \mathbb{R}^n$, we define two notions of robustness for BNNs in an adversarial setting: probabilistic robustness and decision robustness. Probabilistic robustness is the probability that for all points in $T$ the output of a BNN sampled from the posterior is in $S$. Decision robustness, on the other hand, considers the optimal decision of a BNN and checks whether for all points in $T$ the optimal decision of the BNN for a given loss function lies within the output set $S$. Although exact computation of these robustness properties is challenging due to the probabilistic and non-convex nature of BNNs, we present a unified computational framework for efficiently and formally bounding them. Our approach is based on weight interval sampling, integration, and bound propagation techniques, can be applied to BNNs with a large number of parameters, and is independent of the (approximate) inference method employed to train the BNN. We evaluate the effectiveness of our methods on various regression and classification tasks, including an industrial regression benchmark, MNIST, traffic sign recognition, and airborne collision avoidance, and demonstrate that our approach enables certification of robustness and uncertainty of BNN predictions.
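
Written out, the two properties from the abstract can be stated as follows; here $f^{\mathbf{w}}$ denotes the network with weights $\mathbf{w}$, $p(\mathbf{w} \mid \mathcal{D})$ the posterior, and $\mathcal{L}$ the loss. This notation is introduced here for illustration, not taken verbatim from the paper:

```latex
% Probabilistic robustness: posterior probability that every input
% in T is mapped into the output set S.
P_{\mathrm{safe}}(T, S) =
  \Pr_{\mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D})}
  \bigl[\, \forall x \in T :\; f^{\mathbf{w}}(x) \in S \,\bigr]

% Decision robustness: the loss-minimizing decision under the
% posterior stays in S for every input in T.
\forall x \in T :\; \hat{y}(x) \in S,
\qquad
\hat{y}(x) = \operatorname*{arg\,min}_{y}\;
  \mathbb{E}_{\mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D})}
  \bigl[\, \mathcal{L}\bigl(y, f^{\mathbf{w}}(x)\bigr) \,\bigr]
```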
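The "weight interval sampling, integration, and bound propagation" recipe can also be sketched concretely: sample axis-aligned boxes in weight space, use interval bound propagation (IBP) to check whether every weight in a box maps all of $T$ into $S$, and account for the posterior mass of certified boxes. The sketch below is a minimal illustration under assumed choices (a two-layer ReLU network, a mean-field Gaussian posterior fabricated so that the property is plausibly certifiable, and $S = \{y : y \ge 0\}$); every function and parameter name is hypothetical, and it keeps only the best single certified box rather than the paper's tighter integration over many boxes.

```python
# Illustrative sketch: lower-bounding probabilistic robustness of a tiny
# mean-field-Gaussian BNN via weight-box sampling + interval propagation.
# All names, shapes, and posterior values are assumptions for this demo.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def interval_affine(W_lo, W_hi, b_lo, b_hi, x_lo, x_hi):
    """Sound interval bounds on W @ x + b when W, b, and x each lie in a box."""
    Wm, Wr = (W_lo + W_hi) / 2, (W_hi - W_lo) / 2
    xm, xr = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    center = Wm @ xm
    radius = np.abs(Wm) @ xr + Wr @ np.abs(xm) + Wr @ xr
    bm, br = (b_lo + b_hi) / 2, (b_hi - b_lo) / 2
    return center + bm - (radius + br), center + bm + (radius + br)

def box_mass(mu, sd, lo, hi):
    """Mass a factorized Gaussian posterior assigns to an axis-aligned box."""
    return float(np.prod(norm.cdf((hi - mu) / sd) - norm.cdf((lo - mu) / sd)))

# Hypothetical mean-field posterior for a 2-4-1 ReLU network; the output
# layer is biased positive so that S = {y : y >= 0} is plausibly certifiable.
params = [
    (rng.normal(size=(4, 2)), 0.1 * np.ones((4, 2))),          # W1 mean, std
    (rng.normal(size=4), 0.1 * np.ones(4)),                    # b1
    (np.abs(rng.normal(size=(1, 4))), 0.1 * np.ones((1, 4))),  # W2
    (np.array([1.0]), 0.1 * np.ones(1)),                       # b2
]
x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input region T
k = 1.0          # half-width of each sampled weight box, in posterior stds
p_safe_lower = 0.0

for _ in range(1000):
    # Sample a weight box centered on a posterior draw.
    boxes = [(w - k * sd, w + k * sd, mu, sd)
             for mu, sd in params
             for w in [rng.normal(mu, sd)]]
    (W1l, W1u, *_), (b1l, b1u, *_), (W2l, W2u, *_), (b2l, b2u, *_) = boxes
    # Propagate the input box through the network with interval arithmetic.
    h_lo, h_hi = interval_affine(W1l, W1u, b1l, b1u, x_lo, x_hi)
    h_lo, h_hi = np.maximum(h_lo, 0.0), np.maximum(h_hi, 0.0)  # ReLU is monotone
    y_lo, _ = interval_affine(W2l, W2u, b2l, b2u, h_lo, h_hi)
    if y_lo[0] >= 0.0:  # every weight in the box maps all of T into S
        mass = np.prod([box_mass(mu.ravel(), sd.ravel(), lo.ravel(), hi.ravel())
                        for lo, hi, mu, sd in boxes])
        p_safe_lower = max(p_safe_lower, mass)  # keep best single certified box

print(f"(illustrative) certified lower bound on P_safe: {p_safe_lower:.4f}")
```

Summing the masses of several certified boxes would double-count wherever the boxes overlap, which is why this sketch takes a maximum; partitioning weight space into disjoint boxes would let the certified masses be added for a tighter bound.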
