Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators

Understanding uncertainty in Explainable AI (XAI) is crucial for building trust and ensuring reliable decision-making with machine learning models. This paper introduces a unified framework for quantifying and interpreting uncertainty in XAI by defining a general explanation function that captures the propagation of uncertainty from its key sources: perturbations in the input data and in the model parameters. By using both analytical and empirical estimates of the explanation variance, we provide a systematic means of assessing the impact of uncertainty on explanations. We illustrate the approach using first-order uncertainty propagation as the analytical estimator. In a comprehensive evaluation across heterogeneous datasets, we compare analytical and empirical estimates of uncertainty propagation and evaluate their robustness. Extending previous work on inconsistencies in explanations, our experiments identify XAI methods that do not reliably capture and propagate uncertainty. Our findings underscore the importance of uncertainty-aware explanations in high-stakes applications and offer new insights into the limitations of current XAI methods. The code for the experiments can be found in our repository at this https URL.
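To make the contrast between the two estimators concrete, the following minimal sketch (an illustration under simplifying assumptions, not the authors' implementation) compares them on a toy logistic-regression model whose explanation is the input gradient (saliency). The analytical estimate uses first-order (delta-method) propagation, Cov(E) ≈ J Σ Jᵀ, where J is the Jacobian of the explanation with respect to the input; the empirical estimate uses Monte Carlo sampling over perturbed inputs. All names and values (w, x0, sigma) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: logistic regression; the "explanation" is the input gradient (saliency).
w = np.array([1.5, -2.0, 0.5])

def model(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def explanation(x):
    # Gradient of the sigmoid output w.r.t. the input.
    p = model(x)
    return p * (1.0 - p) * w

x0 = np.array([0.3, -0.1, 0.8])
sigma = 0.05                      # std of i.i.d. Gaussian input perturbations
cov_x = sigma**2 * np.eye(3)      # input covariance Sigma

# --- Analytical estimate: first-order (delta-method) propagation ---
# Cov(E) ~ J Sigma J^T, with J the Jacobian of the explanation w.r.t. x,
# here approximated by central finite differences.
eps = 1e-5
J = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = eps
    J[:, j] = (explanation(x0 + e) - explanation(x0 - e)) / (2 * eps)
cov_analytical = J @ cov_x @ J.T

# --- Empirical estimate: Monte Carlo variance over perturbed inputs ---
samples = np.array([explanation(x0 + sigma * rng.standard_normal(3))
                    for _ in range(10_000)])
cov_empirical = np.cov(samples, rowvar=False)

print("analytical variances:", np.diag(cov_analytical))
print("empirical variances: ", np.diag(cov_empirical))

For a smooth explanation function and small perturbations, the two covariance estimates should agree closely; large discrepancies of the kind the paper investigates signal that the first-order approximation breaks down or that the XAI method does not propagate uncertainty reliably.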
Cite as:

@article{chiaburu2025_2504.03736,
  title   = {Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators},
  author  = {Teodor Chiaburu and Felix Bießmann and Frank Haußer},
  journal = {arXiv preprint arXiv:2504.03736},
  year    = {2025}
}