ResearchTrend.AI
GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers

23 November 2024
Éloi Zablocki
Valentin Gerard
Amaia Cardiel
Eric Gaussier
Matthieu Cord
Eduardo Valle
Abstract

Understanding deep models is crucial for deploying them in safety-critical applications. We introduce GIFT, a framework for deriving post-hoc, global, interpretable, and faithful textual explanations for vision classifiers. GIFT starts from local, faithful visual counterfactual explanations and employs (vision) language models to translate them into global textual explanations. Crucially, GIFT includes a verification stage that measures the causal effect of the proposed explanations on the classifier's decisions. Through experiments across diverse datasets, including CLEVR, CelebA, and BDD, we demonstrate that GIFT effectively reveals meaningful insights, uncovering tasks, concepts, and biases used by deep vision classifiers. The framework is released at this https URL.

@article{zablocki2025_2411.15605,
  title={GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers},
  author={Éloi Zablocki and Valentin Gerard and Amaia Cardiel and Eric Gaussier and Matthieu Cord and Eduardo Valle},
  journal={arXiv preprint arXiv:2411.15605},
  year={2025}
}