ResearchTrend.AI
Tangentially Aligned Integrated Gradients for User-Friendly Explanations

11 March 2025
Lachlan Simpson
Federico Costanza
Kyle Millar
Adriel Cheng
Cheng-Chew Lim
Hong-Gunn Chew
Abstract

Integrated gradients is a prevalent method in machine learning for addressing the black-box problem of neural networks. The explanations given by integrated gradients depend on a choice of base-point. The choice of base-point is not a priori obvious and can lead to drastically different explanations. There is a longstanding hypothesis that data lies on a low-dimensional Riemannian manifold. The quality of explanations on a manifold can be measured by the extent to which an explanation for a point lies in its tangent space. In this work, we propose that the base-point should be chosen so that it maximises the tangential alignment of the explanation. We formalise the notion of tangential alignment and provide theoretical conditions under which a base-point choice will yield explanations lying in the tangent space. We demonstrate how to approximate the optimal base-point on several well-known image classification datasets. Furthermore, we compare the optimal base-point choice with common base-points and three gradient explainability models.
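To make the setup concrete, here is a minimal sketch of standard integrated gradients with an explicit base-point, together with a naive tangential-alignment score. This is not the paper's implementation: the toy quadratic model, the Riemann-sum step count, and the orthonormal `tangent_basis` are illustrative assumptions only.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=200):
    """Riemann-sum approximation of integrated gradients along the
    straight path from the base-point `baseline` to the input `x`."""
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]  # right endpoints
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def tangential_alignment(attribution, tangent_basis):
    """Fraction of the attribution's norm captured by its projection
    onto an orthonormal basis of the tangent space at x (rows of
    `tangent_basis`). 1.0 means the explanation lies entirely in the
    tangent space; 0.0 means it is entirely normal to the manifold."""
    coeffs = tangent_basis @ attribution
    proj = tangent_basis.T @ coeffs
    return np.linalg.norm(proj) / np.linalg.norm(attribution)

# Toy example: f(x) = x.x, so grad f(x) = 2x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)  # a common (zero) base-point choice

attr = integrated_gradients(grad_fn, x, baseline)
# Completeness: attributions sum to roughly f(x) - f(baseline) = 14.

# Assumed tangent space spanned by the first coordinate axis.
basis = np.array([[1.0, 0.0, 0.0]])
score = tangential_alignment(attr, basis)
```

Comparing `score` across candidate base-points is the spirit of the paper's proposal: pick the base-point whose explanation is most tangentially aligned.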

@article{simpson2025_2503.08240,
  title={Tangentially Aligned Integrated Gradients for User-Friendly Explanations},
  author={Lachlan Simpson and Federico Costanza and Kyle Millar and Adriel Cheng and Cheng-Chew Lim and Hong Gunn Chew},
  journal={arXiv preprint arXiv:2503.08240},
  year={2025}
}