Integrated gradients is a prevalent technique in machine learning for addressing the black-box problem of neural networks. The explanations it produces depend on a choice of base-point. This choice is not a priori obvious and can lead to drastically different explanations. There is a longstanding hypothesis that data lies on a low-dimensional Riemannian manifold. The quality of an explanation on a manifold can be measured by the extent to which the explanation for a point lies in that point's tangent space. In this work, we propose that the base-point should be chosen to maximise the tangential alignment of the explanation. We formalise the notion of tangential alignment and provide theoretical conditions under which a base-point choice yields explanations lying in the tangent space. We demonstrate how to approximate the optimal base-point on several well-known image classification datasets. Furthermore, we compare the optimal base-point choice with common base-point choices and with three gradient-based explainability models.
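The two ingredients the abstract describes can be sketched numerically: integrated gradients as a path integral from a base-point to the input, and tangential alignment as the fraction of an attribution's norm captured by projecting it onto a tangent space. The sketch below is illustrative only, using NumPy on a toy closed-form function; the function, the orthonormal `tangent_basis`, and the helper names are hypothetical and not taken from the paper.

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Riemann-sum approximation of integrated gradients:
    IG_i(x) = (x_i - x'_i) * integral_0^1 df/dx_i(x' + a(x - x')) da,
    where x' is the base-point."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

def tangential_alignment(attribution, tangent_basis):
    """Fraction of the attribution's norm captured by its orthogonal
    projection onto the span of `tangent_basis` (columns orthonormal).
    Returns a value in [0, 1]; 1 means fully tangential."""
    proj = tangent_basis @ (tangent_basis.T @ attribution)
    return np.linalg.norm(proj) / np.linalg.norm(attribution)

# Toy model f(x) = sin(x0) + x1**2 with a known gradient.
f_grad = lambda x: np.array([np.cos(x[0]), 2.0 * x[1]])
x = np.array([1.0, 2.0])

# Two base-points give two different attributions for the same input,
# illustrating the dependence the abstract highlights.
ig_zero = integrated_gradients(f_grad, x, baseline=np.zeros(2))
ig_other = integrated_gradients(f_grad, x, baseline=np.array([-1.0, 0.5]))

# Hypothetical 1-D tangent space at x (e.g. from a local manifold estimate).
tangent = np.array([[1.0], [0.0]])
score = tangential_alignment(ig_zero, tangent)
```

A base-point search in this spirit would compare `tangential_alignment` scores across candidate baselines and keep the maximiser; the completeness axiom (attributions summing to `f(x) - f(baseline)`) gives a quick sanity check on the approximation.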
@article{simpson2025_2503.08240,
  title={Tangentially Aligned Integrated Gradients for User-Friendly Explanations},
  author={Lachlan Simpson and Federico Costanza and Kyle Millar and Adriel Cheng and Cheng-Chew Lim and Hong Gunn Chew},
  journal={arXiv preprint arXiv:2503.08240},
  year={2025}
}