The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
2 November 2017 · arXiv:1711.00867 · Tags: FAtt, XAI
Papers citing "The (Un)reliability of saliency methods" (50 of 118 shown)
Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts · M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik · Tags: KELM · 152/0/0 · 24 Apr 2025
Saliency-driven Dynamic Token Pruning for Large Language Models · Yao Tao, Yehui Tang, Yun Wang, Mingjian Zhu, Hailin Hu, Yunhe Wang · 34/0/0 · 06 Apr 2025
Towards Responsible and Trustworthy Educational Data Mining: Comparing Symbolic, Sub-Symbolic, and Neural-Symbolic AI Methods · Danial Hooshyar, Eve Kikas, Yeongwook Yang, Gustav Šír, Raija Hamalainen, T. Karkkainen, Roger Azevedo · 56/1/0 · 01 Apr 2025
A Unified Framework with Novel Metrics for Evaluating the Effectiveness of XAI Techniques in LLMs · Melkamu Mersha, Mesay Gemeda Yigezu, Hassan Shakil, Ali Al shami, SangHyun Byun, Jugal Kalita · 59/0/0 · 06 Mar 2025
Error-controlled non-additive interaction discovery in machine learning models · Winston Chen, Yifan Jiang, William Stafford Noble, Yang Young Lu · 45/1/0 · 17 Feb 2025
Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models · Melkamu Mersha, Mesay Gemeda Yigezu, Jugal Kalita · Tags: ELM · 49/3/0 · 26 Jan 2025
GL-ICNN: An End-To-End Interpretable Convolutional Neural Network for the Diagnosis and Prediction of Alzheimer's Disease · Wenjie Kang, L. Jiskoot, Peter De Deyn, G. Biessels, Huiberdina Koek, ..., H. Middelkoop, W. M. van der Flier, Willemijn J. Jansen, S. Klein, Esther E. Bron · 29/0/0 · 20 Jan 2025
COMIX: Compositional Explanations using Prototypes · S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz · 136/0/0 · 10 Jan 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics · Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger · Tags: XAI, ELM · 34/2/0 · 03 Jan 2025
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models · Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc · Tags: SSL · 61/1/0 · 01 Jul 2024
Exposing Image Classifier Shortcuts with Counterfactual Frequency (CoF) Tables · James Hinns, David Martens · 41/2/0 · 24 May 2024
Robust Explainable Recommendation · Sairamvinay Vijayaraghavan, Prasant Mohapatra · Tags: AAML · 23/0/0 · 03 May 2024
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey · Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova · 54/5/0 · 02 May 2024
On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis · Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler · 27/4/0 · 21 Apr 2024
What Sketch Explainability Really Means for Downstream Tasks · Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song · 30/4/0 · 14 Mar 2024
3VL: Using Trees to Improve Vision-Language Models' Interpretability · Nir Yellinek, Leonid Karlinsky, Raja Giryes · Tags: CoGe, VLM · 49/4/0 · 28 Dec 2023
CAManim: Animating end-to-end network activation maps · Emily Kaczmarek, Olivier X. Miguel, Alexa C. Bowie, R. Ducharme, Alysha L. J. Dingwall-Harvey, S. Hawken, Christine M. Armour, Mark C. Walker, Kevin Dick · Tags: HAI · 19/1/0 · 19 Dec 2023
CEIR: Concept-based Explainable Image Representation Learning · Yan Cui, Shuhong Liu, Liuzhuozheng Li, Zhiyuan Yuan · Tags: SSL, VLM · 23/3/0 · 17 Dec 2023
Improving Interpretation Faithfulness for Vision Transformers · Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang · 24/5/0 · 29 Nov 2023
SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training · Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo · Tags: FAtt, AAML · 28/2/0 · 09 Nov 2023
Interpretability-Aware Vision Transformer · Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu · Tags: ViT · 80/7/0 · 14 Sep 2023
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) · Timo Speith, Markus Langer · 24/12/0 · 26 Jul 2023
Single-Class Target-Specific Attack against Interpretable Deep Learning Systems · Eldor Abdukhamidov, Mohammed Abuhamad, George K. Thiruvathukal, Hyoungshick Kim, Tamer Abuhmed · Tags: AAML · 25/2/0 · 12 Jul 2023
Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance · Shantanu Ghosh, Kayhan Batmanghelich · 30/0/0 · 07 Jul 2023
Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception? · Felix Wichmann, Robert Geirhos · 32/25/0 · 26 May 2023
Take 5: Interpretable Image Classification with a Handful of Features · Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn · Tags: FAtt · 29/7/0 · 23 Mar 2023
Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation · N. Jethani, A. Saporta, Rajesh Ranganath · Tags: FAtt · 29/10/0 · 24 Feb 2023
The Generalizability of Explanations · Hanxiao Tan · Tags: FAtt · 13/1/0 · 23 Feb 2023
Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations · Zhenxiao Cheng, Jie Zhou, Wen Wu, Qin Chen, Liang He · 22/3/0 · 21 Feb 2023
sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI · Xin Zhang, Liangxiu Han, Lianghao Han, Haoming Chen, Darren Dancey, Daoqiang Zhang · Tags: MedIm · 13/4/0 · 17 Feb 2023
On The Coherence of Quantitative Evaluation of Visual Explanations · Benjamin Vandersmissen, José Oramas · Tags: XAI, FAtt · 26/3/0 · 14 Feb 2023
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal · M. Hashemi, Ali Darejeh, Francisco Cruz · 40/3/0 · 07 Feb 2023
Variational Information Pursuit for Interpretable Predictions · Aditya Chattopadhyay, Kwan Ho Ryan Chan, B. Haeffele, D. Geman, René Vidal · Tags: DRL · 15/10/0 · 06 Feb 2023
Towards Reconciling Usability and Usefulness of Explainable AI Methodologies · Pradyumna Tambwekar, Matthew C. Gombolay · 28/8/0 · 13 Jan 2023
Valid P-Value for Deep Learning-Driven Salient Region · Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi · Tags: FAtt, AAML · 19/14/0 · 06 Jan 2023
Interpretable ML for Imbalanced Data · Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh V. Chawla · 22/1/0 · 15 Dec 2022
On the Relationship Between Explanation and Prediction: A Causal View · Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim · Tags: FAtt, CML · 24/14/0 · 13 Dec 2022
This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy · Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga · Tags: MedIm · 22/9/0 · 05 Dec 2022
Understanding and Enhancing Robustness of Concept-based Models · Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang · Tags: AAML · 25/18/0 · 29 Nov 2022
Towards More Robust Interpretation via Local Gradient Alignment · Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon · Tags: FAtt · 25/5/0 · 29 Nov 2022
Attribution-based XAI Methods in Computer Vision: A Review · Kumar Abhishek, Deeksha Kamath · 27/18/0 · 27 Nov 2022
MEGAN: Multi-Explanation Graph Attention Network · Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich · 16/8/0 · 23 Nov 2022
Explaining Image Classifiers with Multiscale Directional Image Representation · Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie · 26/4/0 · 22 Nov 2022
CRAFT: Concept Recursive Activation FacTorization for Explainability · Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre · 19/102/0 · 17 Nov 2022
Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods · Josip Jukić, Martin Tutek, Jan Snajder · Tags: FAtt · 16/0/0 · 15 Nov 2022
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations · Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez · Tags: XAI, FAtt · 29/18/0 · 10 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey · Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian · Tags: XAI, FAtt, AAML · 32/4/0 · 09 Nov 2022
Explainable Deep Learning to Profile Mitochondrial Disease Using High Dimensional Protein Expression Data · Atif Khan, C. Lawless, Amy Vincent, Satish Pilla, S. Ramesh, A. Mcgough · 28/0/0 · 31 Oct 2022
The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability? · Matija Franklin · Tags: TDI · 21/1/0 · 05 Oct 2022
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models · S. Karatsiolis, A. Kamilaris · Tags: FAtt · 29/1/0 · 19 Sep 2022