Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees

19 May 2023 · arXiv:2305.11997
Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
Topics: OOD, AAML
Abstract

There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly. Towards finding robust counterfactuals, existing literature often assumes that the original model $m$ and the new model $M$ are bounded in the parameter space, i.e., $\|\text{Params}(M) - \text{Params}(m)\| < \Delta$. However, models can often change significantly in the parameter space with little to no change in their predictions or accuracy on the given dataset. In this work, we introduce a mathematical abstraction termed \textit{naturally-occurring} model change, which allows for arbitrary changes in the parameter space such that the change in predictions on points that lie on the data manifold is limited. Next, we propose a measure -- that we call \textit{Stability} -- to quantify the robustness of counterfactuals to potential model changes for differentiable models, e.g., neural networks. Our main contribution is to show that counterfactuals with a sufficiently high value of \textit{Stability}, as defined by our measure, will remain valid after potential \textit{naturally-occurring} model changes with high probability (leveraging concentration bounds for Lipschitz functions of independent Gaussians). Since our quantification depends on the local Lipschitz constant around a data point, which is not always available, we also examine practical relaxations of our proposed measure and demonstrate experimentally how they can be incorporated to find robust counterfactuals for neural networks that are close, realistic, and remain valid after potential model changes. This work also has interesting connections with model multiplicity, also known as the Rashomon effect.
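The abstract mentions practical relaxations of the Stability measure for cases where the local Lipschitz constant around a point is unavailable. Below is a minimal sketch of what one such sampling-based relaxation could look like: it scores a counterfactual by how confidently the model's output stays on the desired side of the decision boundary over Gaussian perturbations drawn around it. This is an illustration inferred from the abstract, not the authors' exact definition; the function name `stability_estimate`, the hyperparameters `k` and `sigma`, and the threshold `tau` are assumptions.

```python
import torch

def stability_estimate(model, x_cf, k=1000, sigma=0.1):
    """Sampling-based proxy for the Stability of a counterfactual x_cf.

    Draws k Gaussian perturbations of x_cf and returns the mean predicted
    score minus its standard deviation, so a high value requires the model
    to be confidently positive in a whole neighbourhood of x_cf rather than
    at the single point. (Illustrative relaxation; k and sigma are assumed.)
    """
    x_cf = torch.as_tensor(x_cf, dtype=torch.float32)
    noise = sigma * torch.randn(k, x_cf.numel())
    samples = x_cf.unsqueeze(0) + noise          # k perturbed points around the counterfactual
    with torch.no_grad():
        scores = model(samples).squeeze(-1)      # assumed: model outputs a score in [0, 1]
    return (scores.mean() - scores.std()).item()

# Usage sketch: keep only counterfactuals whose estimated Stability clears a threshold.
# tau = 0.7  # hypothetical threshold; a higher tau favours more conservative, more robust counterfactuals
# robust = [x for x in candidate_counterfactuals if stability_estimate(model, x) >= tau]
```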
