
Integrated Gradient attribution for Gaussian Processes with non-Gaussian likelihoods

Abstract

Gaussian Process (GP) models are a powerful tool in probabilistic machine learning with a solid theoretical foundation. Thanks to recent advances, modeling complex data with GPs is becoming increasingly feasible, which makes them an interesting alternative to deep learning and related approaches. As the latter become increasingly influential in society, the need to make a model's decision-making process transparent and explainable has become a major focus of research. One prominent direction in interpretable machine learning is the use of gradient-based approaches, such as Integrated Gradients, to quantify feature attribution locally for a given datapoint of interest. Since GPs and the behavior of their partial derivatives are well studied and straightforward to derive, studying gradient-based explainability for GPs is a promising direction of research. Unfortunately, partial derivatives of GPs become less trivial to handle when dealing with non-Gaussian target data, as in classification or more sophisticated regression problems. This paper therefore proposes an approach for applying Integrated Gradient-based explainability to non-Gaussian GP models, offering both analytical and approximate solutions. This extends gradient-based explainability to probabilistic models with complex likelihoods, thereby broadening their practical applicability.
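To ground the idea, the following sketch applies the standard Integrated Gradients definition to the posterior mean of a plain GP regression model with an RBF kernel. This is only an illustration of the baseline setting the paper starts from, not the paper's method: the data, lengthscale, and noise level are made-up, and a Gaussian likelihood is used (the paper's contribution concerns the non-Gaussian case, where the posterior and its derivatives are no longer available in this simple closed form).

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 ls^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Toy GP regression fit (hypothetical data, Gaussian likelihood for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
ls, noise = 1.0, 0.1
alpha = np.linalg.solve(rbf(X, X, ls) + noise * np.eye(30), y)

def mean(x):
    # Posterior mean m(x) = k(x, X) @ alpha
    return rbf(x[None, :], X, ls)[0] @ alpha

def grad_mean(x):
    # Analytic gradient of the posterior mean for the RBF kernel:
    # d/dx k(x, x_i) = -(x - x_i) / ls^2 * k(x, x_i)
    k = rbf(x[None, :], X, ls)[0]                      # shape (30,)
    dk = -(x[None, :] - X) / ls**2 * k[:, None]        # shape (30, 2)
    return dk.T @ alpha                                # shape (2,)

def integrated_gradients(x, baseline, steps=100):
    # Midpoint Riemann-sum approximation of
    # IG_j(x) = (x_j - x'_j) * \int_0^1 dm(x' + t(x - x')) / dx_j dt
    ts = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_mean(baseline + t * (x - baseline)) for t in ts])
    return (x - baseline) * grads.mean(axis=0)
```

A useful sanity check is the completeness axiom of Integrated Gradients: the attributions sum to the difference between the prediction at the input and at the baseline, `mean(x) - mean(baseline)`.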

@article{seitz2025_2205.12797,
  title={Integrated Gradient attribution for Gaussian Processes with non-Gaussian likelihoods},
  author={Sarem Seitz},
  journal={arXiv preprint arXiv:2205.12797},
  year={2025}
}