arXiv:1901.09392
On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar. 27 January 2019. [FAtt]
Papers citing "On the (In)fidelity and Sensitivity for Explanations" (50 / 69 papers shown)
Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana, Mohan Kankanhalli, Rozita Dara. 05 May 2025.

Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci. 02 May 2025.

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy. 13 Mar 2025.

Interpretable High-order Knowledge Graph Neural Network for Predicting Synthetic Lethality in Human Cancers
Xuexin Chen, Ruichu Cai, Zhengting Huang, Zijian Li, Jie Zheng, Min Wu. 08 Mar 2025.

XEQ Scale for Evaluating XAI Experience Quality
A. Wijekoon, Nirmalie Wiratunga, D. Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Belén Díaz-Agudo, Derek Bridge. 20 Jan 2025.

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger. 03 Jan 2025. [XAI, ELM]

A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna, Niladri Sett. 30 Dec 2024.

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne. 03 May 2024. [FAtt, LRM]

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova. 02 May 2024.

Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong, Qi Dou, Farzan Farnia. 06 Apr 2024. [FAtt]

Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski. 04 Apr 2024.

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song. 14 Mar 2024.

Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, Jose Miguel Hernandez-Lobato. 13 Feb 2024. [AAML]

A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover. 19 Jan 2024.

Manifold-based Shapley for SAR Recognization Network Explanation
Xuran Hu, Mingzhe Zhu, Yuanjing Liu, Zhenpeng Feng, Ljubiša Stanković. 06 Jan 2024. [FAtt, GAN]

An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
Balachandar Gowrisankar, V. Thing. 08 Dec 2023. [AAML]

Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang. 29 Nov 2023.

Evaluating Explanation Methods for Vision-and-Language Navigation
Guanqi Chen, Lei Yang, Guanhua Chen, Jia Pan. 10 Oct 2023. [XAI]

Beyond XAI: Obstacles Towards Responsible AI
Yulu Pi. 07 Sep 2023.

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev. 06 Aug 2023. [XAI]

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith, Markus Langer. 26 Jul 2023.

Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu, P. Bell, A. Rajan. 29 May 2023.

Towards Evaluating Explanations of Vision Transformers for Medical Imaging
Piotr Komorowski, Hubert Baniecki, P. Biecek. 12 Apr 2023. [MedIm]

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache. 17 Jan 2023.

Impossibility Theorems for Feature Attribution
Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim. 22 Dec 2022. [FAtt]

Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas. 07 Dec 2022.

SEAT: Stable and Explainable Attention
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang. 23 Nov 2022. [OOD]

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi. 17 Nov 2022.

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez. 10 Nov 2022. [XAI, FAtt]

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian. 09 Nov 2022. [XAI, FAtt, AAML]

Privacy Meets Explainability: A Comprehensive Impact Benchmark
S. Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed. 08 Nov 2022.

EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu, Huy Mai, My T. Thai. 18 Sep 2022. [AAML]

ScoreCAM GNN: An Optimal Explanation of Deep Networks on Graphs
Adrien Raison, Pascal Bourdon, David Helbert. 26 Jul 2022. [FAtt, GNN]

LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
Khiem H. Le, Hieu H. Pham, Thao BT. Nguyen, Tu Nguyen, T. Thanh, Cuong D. Do. 25 Jul 2022.

Faithful Explanations for Deep Graph Models
Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta. 24 May 2022. [FAtt]

Interpretability of Machine Learning Methods Applied to Neuroimaging
Elina Thibeau-Sutre, S. Collin, Ninon Burgos, O. Colliot. 14 Apr 2022.

Explainable Analysis of Deep Learning Methods for SAR Image Classification
Sheng Su, Ziteng Cui, Weiwei Guo, Zenghui Zhang, Wenxian Yu. 14 Apr 2022. [XAI]

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar. 25 Feb 2022. [FAtt]

First is Better Than Last for Language Data Influence
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar. 24 Feb 2022. [TDI]

Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys. 22 Feb 2022. [FAtt]

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre. 15 Feb 2022. [AAML]

Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed. 08 Feb 2022. [AI4TS]

Towards a consistent interpretation of AIOps models
Yingzhe Lyu, Gopi Krishnan Rajbahadur, Dayi Lin, Boyuan Chen, Zhen Ming, Z. Jiang. 04 Feb 2022. [AI4CE]

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits. 05 Dec 2021. [OOD]

Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu. 08 Nov 2021. [AAML]

Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong, Alexandra Brintrup. 19 Oct 2021.

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger. 16 Oct 2021. [MedIm]

Diagnostics-Guided Explanation Generation
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein. 08 Sep 2021. [LRM, FAtt]

A Survey on Automated Fact-Checking
Zhijiang Guo, M. Schlichtkrull, Andreas Vlachos. 26 Aug 2021.

Semantic Concentration for Domain Adaptation
Shuang Li, Mixue Xie, Fangrui Lv, Chi Harold Liu, Jian Liang, C. Qin, Wei Li. 12 Aug 2021.