arXiv: 2101.08758
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations

Conference on Fairness, Accountability and Transparency (FAccT), 2021
21 January 2021
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama

Papers citing "How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations"

50 of 59 papers shown
On the Design and Evaluation of Human-centered Explainable AI Systems: A Systematic Review and Taxonomy
Aline Mangold, Juliane Zietz, Susanne Weinhold, Sebastian Pannasch
14 Oct 2025

Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
Maria J. P. Peixoto, Akriti Pandey, Ahsan Zaman, Peter R. Lewis
14 Aug 2025

Beyond Technocratic XAI: The Who, What & How in Explanation Design
Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg, Anders Søgaard
12 Aug 2025

Evaluating explainable AI for deep learning-based network intrusion detection system alert classification
International Conference on Information Systems Security and Privacy (ICISSP), 2025
Rajesh Kalakoti, Risto Vaarandi, Hayretdin Bahsi, Sven Nõmm
09 Jun 2025

Unveiling the Black Box: A Multi-Layer Framework for Explaining Reinforcement Learning-Based Cyber Agents
Diksha Goel, Kristen Moore, Jeff Wang, Minjune Kim, Thanh Thi Nguyen
16 May 2025

Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci
02 May 2025

In defence of post-hoc explanations in medical AI
Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring
29 Apr 2025

Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
Zhen Tan, Song Wang, Jiayi Zhang, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu
11 Apr 2025

A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity
Ethics and Information Technology (EIT), 2025
Joshua Hatherley
07 Apr 2025

Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
Roan Schellingerhout, Francesco Barile, Nava Tintarev
24 Sep 2024

Explainable AI needs formalization
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
22 Sep 2024

Explainable AI: Definition and attributes of a good explanation for health AI
AI and Ethics (AI & Ethics), 2024
E. Kyrimi, S. McLachlan, Jared M Wohlgemut, Zane B Perkins, David A. Lagnado, W. Marsh, the ExAIDSS Expert Group
09 Sep 2024

Evaluating Fairness in Transaction Fraud Models: Fairness Metrics, Bias Audits, and Challenges
International Conference on AI in Finance (ICAF), 2024
Parameswaran Kamalaruban, Yulu Pi, Stuart Burrell, Eleanor Drage, Piotr Skalski, Jason Wong, David Sutton
06 Sep 2024

Need of AI in Modern Education: in the Eyes of Explainable AI (xAI)
Supriya Manna, Dionis Barcari
31 Jul 2024

Auditing Local Explanations is Hard
Robi Bhattacharjee, U. V. Luxburg
18 Jul 2024

Applications of Explainable artificial intelligence in Earth system science
Feini Huang, Shijie Jiang, Lu Li, Yongkun Zhang, Ye Zhang, Ruqing Zhang, Qingliang Li, Danxi Li, Wei Shangguan, Yongjiu Dai
12 Jun 2024

Explainable AI improves task performance in human-AI collaboration
J. Senoner, Simon Schallmoser, Bernhard Kratzwald, Stefan Feuerriegel, Torbjørn H. Netland
12 Jun 2024

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez
31 May 2024

False Sense of Security in Explainable Artificial Intelligence (XAI)
N. C. Chung, Hongkyou Chung, Hearim Lee, L. Brocki, Hongbeom Chung, George C. Dyer
06 May 2024

The Role of Syntactic Span Preferences in Post-Hoc Explanation Disagreement
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
28 Mar 2024

How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey
Thu Nguyen, Alessandro Canossa, Jichen Zhu
21 Mar 2024

What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol, Julia E. Vogt
19 Mar 2024

Trustworthy AI: Deciding What to Decide
Caesar Wu, Yuan-Fang Li, Jian Li, Jingjing Xu, Pascal Bouvry
21 Nov 2023

Deep Natural Language Feature Learning for Interpretable Prediction
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Felipe Urrutia, Cristian Buc, Valentin Barriere
09 Nov 2023

Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Jonathan Kamp, Lisa Beinborn, Antske Fokkens
09 Oct 2023

Pixel-Grounded Prototypical Part Networks
IEEE Winter Conference on Applications of Computer Vision (WACV), 2023
Zachariah Carmichael, Suhas Lohit, A. Cherian, Michael Jeffrey Jones, Walter J. Scheirer
25 Sep 2023

Is Task-Agnostic Explainable AI a Myth?
Alicja Chaszczewicz
13 Jul 2023

Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions
J. Michiels, Marina De Vos, Johan A. K. Suykens
28 Jun 2023

Survey of Trustworthy AI: A Meta Decision of AI
Caesar Wu, Yuan-Fang Li, Pascal Bouvry
01 Jun 2023

Post Hoc Explanations of Language Models Can Improve Language Models
Neural Information Processing Systems (NeurIPS), 2023
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju
19 May 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
The AI Magazine (AI Mag.), 2023
Raymond Fok, Daniel S. Weld
12 May 2023

Towards a Praxis for Intercultural Ethics in Explainable AI
Chinasa T. Okolo
24 Apr 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
Conference on Fairness, Accountability and Transparency (FAccT), 2023
L. Nannini, Agathe Balayn, A. Smith
20 Apr 2023

From Explanation to Action: An End-to-End Human-in-the-loop Framework for Anomaly Reasoning and Management
Xueying Ding, Nikita Seleznev, Senthil Kumar, C. Bayan Bruss, Leman Akoglu
06 Apr 2023

Reckoning with the Disagreement Problem: Explanation Consensus as a Training Objective
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
Avi Schwarzschild, Max Cembalest, K. Rao, Keegan E. Hines, John P Dickerson
23 Mar 2023

Assisting Human Decisions in Document Matching
Joon Sik Kim, Valerie Chen, Danish Pruthi, Nihar B. Shah, Ameet Talwalkar
16 Feb 2023

A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies
Ada Martin, Valerie Chen, Sérgio Jesus, Pedro Saleiro
15 Feb 2023

A Detailed Study of Interpretability of Deep Neural Network based Top Taggers
Ayush Khot, Mark S. Neubauer, Avik Roy
09 Oct 2022

A Comprehensive Survey on Trustworthy Recommender Systems
Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, ..., Qidong Liu, Yiqi Wang, Hanfeng Xu, Lei Chen, Qing Li
21 Sep 2022

Change Detection for Local Explainability in Evolving Data Streams
International Conference on Information and Knowledge Management (CIKM), 2022
Johannes Haug, Alexander Braun, Stefan Zurn, Gjergji Kasneci
06 Sep 2022

Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
Diogo Leitao, Pedro Saleiro, Mário A. T. Figueiredo, P. Bizarro
27 Jun 2022

A Test for Evaluating Performance in Human-Computer Systems
Andres Campero, Michelle Vaccaro, Jaeyoon Song, Haoran Wen, Abdullah Almaatouq, Thomas W. Malone
24 Jun 2022

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods
Kasun Amarasinghe, Kit T. Rodolfa, Sérgio Jesus, Valerie Chen, Vladimir Balayan, Pedro Saleiro, P. Bizarro, Ameet Talwalkar, Rayid Ghani
24 Jun 2022

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
22 Jun 2022

Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2022
Q. V. Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar
22 Jun 2022

On the Bias-Variance Characteristics of LIME and SHAP in High Sparsity Movie Recommendation Explanation Tasks
Claudia V. Roberts, Ehtsham Elahi, Ashok Chandrashekar
09 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Neural Information Processing Systems (NeurIPS), 2022
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
05 Jun 2022

A Psychological Theory of Explainability
International Conference on Machine Learning (ICML), 2022
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
17 May 2022

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022
Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
15 May 2022

Framework for Evaluating Faithfulness of Local Explanations
International Conference on Machine Learning (ICML), 2022
S. Dasgupta, Nave Frost, Michal Moshkovitz
01 Feb 2022