ResearchTrend.AI

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods (arXiv:1910.02065)

4 October 2019
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt, AAML

Papers citing "Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods"

41 papers
Generating Part-Based Global Explanations Via Correspondence
Kunal Rathore, Prasad Tadepalli
18 Sep 2025

On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations
Amir Mehrpanah, Matteo Gamba, Kevin Smith, Hossein Azizpour
FAtt
14 Aug 2025

Multi-criteria Rank-based Aggregation for Explainable AI
Sujoy Chatterjee, Everton Romanzini Colombo, Marcos Medeiros Raimundo
XAI
30 May 2025

Fixed Point Explainability
Emanuele La Malfa, Jon Vadillo, Marco Molinari, Michael Wooldridge
18 May 2025

Self-Explaining Neural Networks for Business Process Monitoring
Shahaf Bassan, Shlomit Gur, Sergey Zeltyn, Konstantinos Mavrogiorgos, Ron Eliav, Dimosthenis Kyriazis
23 Mar 2025

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
FAtt, XAI
11 Feb 2025

Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set
Guilherme Seidyo Imai Aldeia, Fabrício Olivetti de França
08 Apr 2024

The Role of Syntactic Span Preferences in Post-Hoc Explanation Disagreement
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
28 Mar 2024

Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
Dongfang Li, Baotian Hu, Qingcai Chen, Shan He
29 Dec 2023

Clairvoyance: A Pipeline Toolkit for Medical Time Series
International Conference on Learning Representations (ICLR), 2023
Daniel Jarrett, Chang Jo Kim, Ioana Bica, Zhaozhi Qian, A. Ercole, M. Schaar
AI4TS
28 Oct 2023

How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
Zachariah Carmichael, Walter J. Scheirer
FAtt
27 Oct 2023

A Uniform Language to Explain Decision Trees
International Conference on Principles of Knowledge Representation and Reasoning (KR), 2023
Marcelo Arenas, Pablo Barceló, Diego Bustamante, Jose Caraball, Bernardo Subercaseaux
18 Oct 2023

Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
FAtt
09 Oct 2023

Pixel-Grounded Prototypical Part Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Zachariah Carmichael, Suhas Lohit, A. Cherian, Michael Jeffrey Jones, Walter J. Scheirer
25 Sep 2023

Formally Explaining Neural Networks within Reactive Systems
Formal Methods in Computer-Aided Design (FMCAD), 2023
Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz
AAML
31 Jul 2023

Faithfulness Tests for Natural Language Explanations
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, J. Simonsen, Isabelle Augenstein
FAtt
29 May 2023

On Computing Probabilistic Abductive Explanations
International Journal of Approximate Reasoning (IJAR), 2022
Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin
FAtt, XAI
12 Dec 2022

Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks
International Conference on Tools and Algorithms for Construction and Analysis of Systems (TACAS), 2022
Shahaf Bassan, Guy Katz
FAtt, AAML
25 Oct 2022

Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM, XAI
24 Oct 2022

On Computing Relevant Features for Explaining NBCs
Yacine Izza, Sasha Rubin
11 Jul 2022

Eliminating The Impossible, Whatever Remains Must Be True
AAAI Conference on Artificial Intelligence (AAAI), 2022
Jinqiang Yu, Alexey Ignatiev, Peter Stuckey, Nina Narodytska, Sasha Rubin
20 Jun 2022

On Tackling Explanation Redundancy in Decision Trees
Journal of Artificial Intelligence Research (JAIR), 2022
Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt
20 May 2022

Towards a consistent interpretation of AIOps models
ACM Transactions on Software Engineering and Methodology (TOSEM), 2022
Yingzhe Lyu, Gopi Krishnan Rajbahadur, Dayi Lin, Boyuan Chen, Zhen Ming, Z. Jiang
AI4CE
04 Feb 2022

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Conference on Fairness, Accountability and Transparency (FAccT), 2022
Sebastian Bordt, Michèle Finck, Eric Raidl, U. V. Luxburg
AILaw
25 Jan 2022

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova
14 Nov 2021

Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
International Conference on Machine Learning (ICML), 2021
Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
25 Jun 2021

A Framework for Evaluating Post Hoc Feature-Additive Explainers
Zachariah Carmichael, Walter J. Scheirer
FAtt
15 Jun 2021

Prompting Contrastive Explanations for Commonsense Reasoning Tasks
Findings, 2021
Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi
ReLM, LRM
12 Jun 2021

Evaluating Local Explanations using White-box Models
Amir Hossein Akhavan Rahnama, Judith Butepage, Pierre Geurts, Henrik Bostrom
FAtt
04 Jun 2021

On Efficiently Explaining Graph-Based Classifiers
International Conference on Principles of Knowledge Representation and Reasoning (KR), 2021
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt
02 Jun 2021

Efficient Explanations With Relevant Sets
Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin
FAtt
01 Jun 2021

SAT-Based Rigorous Explanations for Decision Lists
International Conference on Theory and Applications of Satisfiability Testing (SAT), 2021
Alexey Ignatiev, Sasha Rubin
XAI
14 May 2021

To what extent do human explanations of model behavior align with actual model behavior?
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020
Grusha Prasad, Yixin Nie, Joey Tianyi Zhou, Robin Jia, Douwe Kiela, Adina Williams
24 Dec 2020

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych
LRM
07 Oct 2020

Explaining Deep Neural Networks
Oana-Maria Camburu
XAI, FAtt
04 Oct 2020

Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis
Ming Fan, Wenying Wei, Xiaofei Xie, Yang Liu, X. Guan, Ting Liu
FAtt, AAML
13 Aug 2020

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
Neural Information Processing Systems (NeurIPS), 2020
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
FAtt
11 Aug 2020

NILE: Natural Language Inference with Faithful Natural Language Explanations
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Sawan Kumar, Partha P. Talukdar
XAI, LRM
25 May 2020

Evaluating and Aggregating Feature-based Model Explanations
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Umang Bhatt, Adrian Weller, J. M. F. Moura
XAI
01 May 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Alon Jacovi, Yoav Goldberg
XAI
07 Apr 2020

Towards a Unified Evaluation of Explanation Methods without Ground Truth
Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
XAI
20 Nov 2019