ResearchTrend.AI

arXiv: 1905.12698

Leveraging Latent Features for Local Explanations

29 May 2019
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu
FAtt

Papers citing "Leveraging Latent Features for Local Explanations"

16 / 16 papers shown
CELL your Model: Contrastive Explanations for Large Language Models
Ronny Luss, Erik Miehling, Amit Dhurandhar
17 Jun 2024 · 43 / 0 / 0
Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities
Munib Mesinovic, Peter Watkinson, Ting Zhu
FaML · 16 Aug 2023 · 19 / 3 / 0
Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
Md Abdul Kadir, G. Addluri, Daniel Sonntag
FAtt · 05 Jul 2023 · 11 / 1 / 0
Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
Saugat Aryal, Mark T. Keane
27 Jan 2023 · 18 / 21 / 0
Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
07 Dec 2022 · 16 / 3 / 0
On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
Dennis L. Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh
FAtt · 02 Nov 2022 · 17 / 9 / 0
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations
Julia El Zini, M. Awad
AAML · 17 Oct 2022 · 18 / 2 / 0
FIND: Explainable Framework for Meta-learning
Xinyue Shao, Hongzhi Wang, Xiao-Wen Zhu, Feng Xiong
FedML · 20 May 2022 · 14 / 2 / 0
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou, J. Shah
18 May 2022 · 62 / 8 / 0
Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners
K. Ramamurthy, Amit Dhurandhar, Dennis L. Wei, Zaid Bin Tariq
FAtt · 02 Feb 2022 · 25 / 3 / 0
AI Explainability 360: Impact and Design
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang
24 Sep 2021 · 12 / 14 / 0
Let the CAT out of the bag: Contrastive Attributed explanations for Text
Saneem A. Chemmengath, A. Azad, Ronny Luss, Amit Dhurandhar
FAtt · 16 Sep 2021 · 26 / 10 / 0
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, B. Eoff, Rosalind W. Picard
AAML · 31 May 2021 · 17 / 53 / 0
Benchmarking and Survey of Explanation Methods for Black Box Models
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo
XAI · 25 Feb 2021 · 33 / 218 / 0
A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives
Nils Rethmeier, Isabelle Augenstein
SSL, VLM · 25 Feb 2021 · 85 / 90 / 0
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML · 24 Jun 2017 · 234 / 2,235 / 0