Uncertainty Quantification for Gradient-based Explanations in Neural Networks

25 March 2024
Mihir Mulye, Matias Valdenegro-Toro
Communities: UQCV, FAtt
Abstract

Explanation methods help understand the reasons for a model's prediction. They are increasingly used for model debugging, performance optimization, and gaining insight into the workings of a model. Given these critical applications, it is imperative to measure the uncertainty associated with the explanations they generate. In this paper, we propose a pipeline to ascertain the explanation uncertainty of neural networks by combining uncertainty estimation methods with explanation methods. We use this pipeline to produce explanation distributions for the CIFAR-10, FER+, and California Housing datasets. By computing the coefficient of variation of these distributions, we evaluate the confidence in the explanations and find that those generated using Guided Backpropagation have low associated uncertainty. Additionally, we compute modified pixel insertion/deletion metrics to evaluate the quality of the generated explanations.
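The pipeline lends itself to a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: it assumes MC Dropout as the uncertainty estimator and plain input gradients as the explanation method (the paper highlights Guided Backpropagation, which would replace the gradient step). It samples several explanations for one input and computes the per-pixel coefficient of variation used to judge explanation confidence. The names `model`, `x`, and `target_class` are placeholders.

```python
import torch

def mc_dropout_saliency(model, x, target_class, n_samples=30):
    """Sample input-gradient explanations under MC Dropout and return
    their per-pixel mean, standard deviation, and coefficient of variation."""
    model.train()  # keep dropout layers active at inference time (MC Dropout)
    samples = []
    for _ in range(n_samples):
        x_in = x.detach().clone().requires_grad_(True)
        logits = model(x_in)
        logits[0, target_class].backward()        # gradient of class score w.r.t. input
        samples.append(x_in.grad.detach().clone())
    s = torch.stack(samples)                      # (n_samples, 1, C, H, W)
    mean, std = s.mean(dim=0), s.std(dim=0)
    # Low coefficient of variation = stable, low-uncertainty explanation;
    # the small epsilon avoids division by zero where the mean is ~0.
    cv = std / (mean.abs() + 1e-8)
    return mean, std, cv
```

An ensemble would work the same way: replace the dropout sampling loop with one gradient pass per ensemble member and stack the resulting saliency maps.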

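The insertion/deletion metrics are only mentioned in passing here, so the following is a hedged sketch of a standard deletion curve rather than the paper's exact modified variant: pixels are zeroed out in order of decreasing saliency, and a faithful explanation should make the target-class probability drop quickly (a lower area under the curve is better).

```python
import torch

@torch.no_grad()
def deletion_curve(model, x, saliency, target_class, steps=20):
    """Progressively delete the most salient pixels and record the
    target-class probability after each deletion step."""
    model.eval()
    n_pixels = x[0, 0].numel()                              # H * W
    # Rank pixel positions by total absolute saliency across channels.
    order = saliency.abs().sum(dim=1).flatten().argsort(descending=True)
    per_step = max(1, n_pixels // steps)
    x_del = x.clone()
    scores = []
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        # Zero out the next batch of most-salient pixels in all channels.
        flat = x_del.view(x_del.shape[0], x_del.shape[1], -1)
        flat[..., idx] = 0.0
        prob = torch.softmax(model(x_del), dim=1)[0, target_class]
        scores.append(prob.item())
    return scores
```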
View on arXiv
@article{mulye2025_2403.17224,
  title={Uncertainty Quantification for Gradient-based Explanations in Neural Networks},
  author={Mihir Mulye and Matias Valdenegro-Toro},
  journal={arXiv preprint arXiv:2403.17224},
  year={2025}
}