Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu (16 July 2019) [XAI, ELM]
arXiv:1907.06831

Papers citing "Evaluating Explanation Without Ground Truth in Interpretable Machine Learning" (33 papers)

On Benchmarking Code LLMs for Android Malware Analysis
Yiling He, Hongyu She, Xingzhi Qian, Xinran Zheng, Zhuo Chen, Z. Qin, Lorenzo Cavallaro (01 Apr 2025) [ELM]

Prompting in the Dark: Assessing Human Performance in Prompt Engineering for Data Labeling When Gold Labels Are Absent
Zeyu He, Saniya Naphade, Ting-Hao 'Kenneth' Huang (16 Feb 2025)

Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
Maxime Kayser, Bayar I. Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, B. Papież, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu (16 Oct 2024)

May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
Tong Zhang, X. J. Yang, Boyang Albert Li (25 Sep 2023)

When a CBR in Hand is Better than Twins in the Bush
Mobyen Uddin Ahmed, Shaibal Barua, Shahina Begum, Mir Riyanul Islam, Rosina O. Weber (09 May 2023)

Tracr: Compiled Transformers as a Laboratory for Interpretability
David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Tom McGrath, Vladimir Mikulik (12 Jan 2023)

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell (27 Jul 2022) [AAML, AI4CE]

Interactive Machine Learning: A State of the Art Review
Natnael A. Wondimu, Cédric Buche, U. Visser (13 Jul 2022) [VLM, HAI]

Why we do need Explainable AI for Healthcare
Giovanni Cinà, Tabea E. Rober, Rob Goedhart, Ilker Birbil (30 Jun 2022)

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
Lijie Wang, Yaozong Shen, Shu-ping Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua-Hong Wu, Haifeng Wang (23 May 2022) [ELM]

Explaining Classifiers by Constructing Familiar Concepts
Johannes Schneider, M. Vlachos (07 Mar 2022)

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, M. V. Keulen, C. Seifert (20 Jan 2022) [ELM, XAI]

Few-Shot Self-Rationalization with Natural Language Prompts
Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters (16 Nov 2021) [LRM]

Robust Feature-Level Adversaries are Interpretability Tools
Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman (07 Oct 2021) [AAML]

Detection Accuracy for Evaluating Compositional Explanations of Units
Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco (16 Sep 2021) [FAtt, CoGe]

Developing a Fidelity Evaluation Approach for Interpretable Machine Learning
M. Velmurugan, Chun Ouyang, Catarina Moreira, Renuka Sindhgatta (16 Jun 2021) [XAI]

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha (17 May 2021)

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
Sarah Wiegreffe, Ana Marasović (24 Feb 2021) [XAI]

Quantitative Evaluations on Saliency Methods: An Experimental Study
Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen (31 Dec 2020) [FAtt, XAI]

Data Representing Ground-Truth Explanations to Evaluate XAI Methods
S. Amiri, Rosina O. Weber, Prateek Goel, Owen Brooks, Archer Gandley, Brian Kitchell, Aaron Zehm (18 Nov 2020) [XAI]

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik (22 Sep 2020) [XAI]

Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability
Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long (16 Sep 2020) [XAI, FAtt]

Explainable Rumor Detection using Inter and Intra-feature Attention Networks
Mingxuan Chen, Ning Wang, K. P. Subbalakshmi (21 Jul 2020)

Explaining Neural Networks by Decoding Layer Activations
Johannes Schneider, Michalis Vlachos (27 May 2020) [AI4CE]

Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
Mário Popolin Neto, F. Paulovich (08 May 2020) [FAtt]

Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura (01 May 2020) [XAI]

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran (30 Apr 2020) [AAML, XAI]

Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu (09 Mar 2020) [CML, ELM, XAI]

Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts
Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen (29 Dec 2019)

Towards a Unified Evaluation of Explanation Methods without Ground Truth
Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang (20 Nov 2019) [XAI]

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar (17 Oct 2019) [FAtt]

Learning Credible Deep Neural Networks with Rationale Regularization
Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu (13 Aug 2019) [FaML]

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller (24 Jun 2017) [FaML]