ResearchTrend.AI

arXiv: 2201.08164 (v3, latest)

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI

ACM Computing Surveys (ACM CSUR), 2022
20 January 2022
Meike Nauta
Jan Trienes
Shreyasi Pathak
Elisa Nguyen
Michelle Peters
Yasmin Schmitt
Jorg Schlotterer
M. V. Keulen
C. Seifert
ELM, XAI

Papers citing "From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI"

50 / 218 papers shown
Measuring the Effect of Background on Classification and Feature Importance in Deep Learning for AV Perception
Anne Sielemann, Valentin Barner, Stefan Wolf, Masoud Roschani, Jens Ziehn, Juergen Beyerer
FAtt
05 Dec 2025

Rethinking AI Evaluation in Education: The TEACH-AI Framework and Benchmark for Generative AI Assistants
Shi Ding, Brian Magerko
ELM
28 Nov 2025

The Directed Prediction Change - Efficient and Trustworthy Fidelity Assessment for Local Feature Attribution Methods
Kevin Iselborn, David Dembinsky, Adriano Lucieri, Andreas Dengel
AAML, FAtt
26 Nov 2025

Formal Abductive Latent Explanations for Prototype-Based Networks
Jules Soria, Zakaria Chihani, Julien Girard-Satabin, Alban Grastien, Romain Xu-Darme, Daniela Cancila
20 Nov 2025

FunnyNodules: A Customizable Medical Dataset Tailored for Evaluating Explainable AI
Luisa Gallée, Yiheng Xiong, Meinrad Beer, Michael Götz
19 Nov 2025

From Confusion to Clarity: ProtoScore - A Framework for Evaluating Prototype-Based XAI
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Helena Monke, Benjamin Sae-Chew, Benjamin Frész, Marco F. Huber
11 Nov 2025

Leveraging Association Rules for Better Predictions and Better Explanations
Gilles Audemard, S. Coste-Marquis, Pierre Marquis, Mehdi Sabiri, N. Szczepanski
21 Oct 2025

A Rectification-Based Approach for Distilling Boosted Trees into Decision Trees
Gilles Audemard, S. Coste-Marquis, Pierre Marquis, Mehdi Sabiri, N. Szczepanski
21 Oct 2025

Preliminary Quantitative Study on Explainability and Trust in AI Systems
Allen Daniel Sunny
17 Oct 2025

On the Design and Evaluation of Human-centered Explainable AI Systems: A Systematic Review and Taxonomy
Aline Mangold, Juliane Zietz, Susanne Weinhold, Sebastian Pannasch
XAI, ELM
14 Oct 2025

ProtoMask: Segmentation-Guided Prototype Learning
Steffen Meinert, Philipp Schlinge, Nils Strodthoff, Martin Atzmueller
01 Oct 2025

TextCAM: Explaining Class Activation Map with Text
Qiming Zhao, Xingjian Li, Xiaoyu Cao, Xiaolong Wu, Min Xu
VLM
01 Oct 2025

o-MEGA: Optimized Methods for Explanation Generation and Analysis
Ľuboš Kriš, Jaroslav Kopčan, Qiwei Peng, Andrej Ridzik, Marcel Veselý, Martin Tamajka
30 Sep 2025

EVO-LRP: Evolutionary Optimization of LRP for Interpretable Model Explanations
Emerald Zhang, Julian Weaver, Samantha R Santacruz, Edward Castillo
28 Sep 2025

Towards Transparent AI: A Survey on Explainable Language Models
Avash Palikhe, Sribala Vidyadhari Chinta, Zhipeng Yin, Rui Guo, Qiang Duan, Jie Yang, Wenbin Zhang
25 Sep 2025

Explainable Graph Neural Networks: Understanding Brain Connectivity and Biomarkers in Dementia
Niharika Tewari, Nguyen Linh Dan Le, Mujie Liu, Jing Ren, Ziqi Xu, Tabinda Sarwar, Veeky Baths, Feng Xia
23 Sep 2025

Cross-Attention is Half Explanation in Speech-to-Text Models
Sara Papi, Dennis Fucci, Marco Gaido, Matteo Negri, L. Bentivogli
LRM
22 Sep 2025

Towards a Transparent and Interpretable AI Model for Medical Image Classifications
Cognitive Neurodynamics (Cogn Neurodyn), 2025
Binbin Wen, Yihang Wu, Tareef Daqqaq, Ahmad Chaddad
20 Sep 2025

Agentic AI for Financial Crime Compliance
Henrik Axelsen, Valdemar Licht, Jan Damsgaard
AIFin
16 Sep 2025

Temporal Counterfactual Explanations of Behaviour Tree Decisions
Tamlin Love, Antonio Andriella, Guillem Alenyà
CML
09 Sep 2025

A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring
International Conference on Business Process Management (BPM), 2025
Martin Käppel, Julian Neuberger, Felix Möhrlein, Sven Weinzierl, Martin Matzner, Stefan Jablonski
24 Aug 2025

Informative Post-Hoc Explanations Only Exist for Simple Functions
Eric Günther, Balázs Szabados, Robi Bhattacharjee, Sebastian Bordt, U. V. Luxburg
FAtt
15 Aug 2025

Memorisation and forgetting in a learning Hopfield neural network: bifurcation mechanisms, attractors and basins
Adam E. Essex, Natalia B. Janson, Rachel A. Norris, Alexander G. Balanov
14 Aug 2025

Adoption of Explainable Natural Language Processing: Perspectives from Industry and Academia on Practices and Challenges
Mahdi Dhaini, Tobias Müller, Roksoliana Rabets, Gjergji Kasneci
13 Aug 2025

GRainsaCK: a Comprehensive Software Library for Benchmarking Explanations of Link Prediction Tasks on Knowledge Graphs
Roberto Barile, Claudia d’Amato, N. Fanizzi
12 Aug 2025

Beyond Technocratic XAI: The Who, What & How in Explanation Design
Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg, Anders Søgaard
12 Aug 2025

Attribution Explanations for Deep Neural Networks: A Theoretical Perspective
Huiqi Deng, Hongbin Pei, Quanshi Zhang, Mengnan Du
FAtt
11 Aug 2025

Multimodal Attention-Aware Fusion for Diagnosing Distal Myopathy: Evaluating Model Interpretability and Clinician Trust
Mohsen Abbaspour Onari, Lucie Charlotte Magister, Yaoxin Wu, Amalia Lupi, Dario Creazzo, ..., Chao Zhang, Isel Grau, M. S. Nobile, Yingqian Zhang, Pietro Lio
02 Aug 2025

Transparent AI: The Case for Interpretability and Explainability
Dhanesh Ramachandram, Himanshu Joshi, Judy Zhu, Dhari Gandhi, Lucas Hartman, Ananya Raval
31 Jul 2025

Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models
Zhanna Kaufman, Madeline Endres, Cindy Xiong Bearfield, Yuriy Brun
31 Jul 2025

Unifying Post-hoc Explanations of Knowledge Graph Completions
Alessandro Lonardi, Samy Badreddine, Tarek R. Besold, Pablo Sanchez Martin
29 Jul 2025

Trustworthy AI-based crack-tip segmentation using domain-guided explanations
Jesco Talies, Eric Breitbarth, D. Melching
OOD
28 Jul 2025

Comprehensive Evaluation of Prototype Neural Networks
Philipp Schlinge, Steffen Meinert, Martin Atzmueller
09 Jul 2025

On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IEEE J-STARS), 2025
Jonas Klotz, Tom Burgert, Tim Siebert
08 Jul 2025

Argument-Based Consistency in Toxicity Explanations of LLMs
Ramaravind Kommiya Mothilal, Joanna Roy, Syed Ishtiaque Ahmed, Shion Guha
23 Jun 2025

A Survey on LLM-Assisted Clinical Trial Recruitment
Shrestha Ghosh, Moritz Schneider, Carina Reinicke, Carsten Eickhoff
18 Jun 2025

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features
Miguel Lago, Ghada Zamzmi, Brandon Eich, Jana G. Delfino
XAI, ELM
16 Jun 2025

Why Do Class-Dependent Evaluation Effects Occur with Time Series Feature Attributions? A Synthetic Data Investigation
Gregor Baer, Isel Grau, Chao Zhang, Pieter Van Gorp
13 Jun 2025

WISCA: A Consensus-Based Approach to Harmonizing Interpretability in Tabular Datasets
A. Banegas-Luna, Horacio Pérez-Sánchez, Carlos Martínez-Cortés
06 Jun 2025

Sampling Preferences Yields Simple Trustworthiness Scores
Sean Steinle
03 Jun 2025

XAI-Units: Benchmarking Explainability Methods with Unit Tests
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Jun Rui Lee, Sadegh Emami, Michael David Hollins, Timothy C. H. Wong, Carlos Ignacio Villalobos Sánchez, Francesca Toni, Dekai Zhang, Adam Dejl
01 Jun 2025

Retrieval Augmented Decision-Making: A Requirements-Driven, Multi-Criteria Framework for Structured Decision Support
H. Wu, Hongxin Zhang, Wei Chen, Jiazhi Xia
24 May 2025

Explainable embeddings with Distance Explainer
Christiaan Meijer, E. G. Patrick Bos
21 May 2025

Growable and Interpretable Neural Control with Online Continual Learning for Autonomous Lifelong Locomotion Learning Machines
The International Journal of Robotics Research (IJRR), 2025
Arthicha Srisuchinnawong, Poramate Manoonpong
CLL, LRM
17 May 2025

RanDeS: Randomized Delta Superposition for Multi-Model Compression
Hangyu Zhou, Aaron Gokaslan, Volodymyr Kuleshov, Bharath Hariharan
MoMe
16 May 2025

PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models
Seongun Kim, Sol A Kim, Geonhyeong Kim, Enver Menadjiev, Chanwoo Lee, Seongwook Chung, Nari Kim, Jaesik Choi
15 May 2025

Evaluating Model Explanations without Ground Truth
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Kaivalya Rawal, Zihao Fu, Eoin Delaney, Chris Russell
FAtt, XAI
15 May 2025

Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics
ARES, 2025
Mohammed Alquliti, Erisa Karafili, BooJoong Kang
XAI
12 May 2025

See What I Mean? CUE: A Cognitive Model of Understanding Explanations
Tobias Labarta, Nhi Hoang, Katharina Weitz, Wojciech Samek, Sebastian Lapuschkin, Leander Weber
09 May 2025

Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity
Shahroz Tariq, Ronal Singh, Mohan Baruwal Chhetri, Surya Nepal, Cécile Paris
06 May 2025
Page 1 of 5