Interpretation of Neural Networks is Fragile
Amirata Ghorbani, Abubakar Abid, James Y. Zou
29 October 2017 | arXiv: 1710.10547
Tags: FAtt, AAML
Papers citing "Interpretation of Neural Networks is Fragile" (showing 50 of 467)
- Encoding Concepts in Graph Neural Networks | Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, F. Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, M. Jamnik, Pietro Liò | 30/21/0 | 27 Jul 2022
- Equivariant and Invariant Grounding for Video Question Answering | Yicong Li, Xiang Wang, Junbin Xiao, Tat-Seng Chua | 18/25/0 | 26 Jul 2022
- Calibrate to Interpret | Gregory Scafarto, N. Posocco, Antoine Bonnefoy | FaML | 11/3/0 | 07 Jul 2022
- Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions | Zulqarnain Khan, Davin Hill, A. Masoomi, Joshua Bone, Jennifer Dy | AAML | 41/3/0 | 24 Jun 2022
- Robustness of Explanation Methods for NLP Models | Shriya Atmakuri, Tejas Chheda, Dinesh Kandula, Nishant Yadav, Taesung Lee, Hessel Tuinhof | FAtt, AAML | 19/4/0 | 24 Jun 2022
- OpenXAI: Towards a Transparent Evaluation of Model Explanations | Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju | XAI | 26/140/0 | 22 Jun 2022
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models | Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho | MLAU | 25/25/0 | 20 Jun 2022
- Efficiently Training Low-Curvature Neural Networks | Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, F. Fleuret | AAML | 23/15/0 | 14 Jun 2022
- On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective | M. Serrurier, Franck Mamalet, Thomas Fel, Louis Bethune, Thibaut Boissin | AAML, FAtt | 32/4/0 | 14 Jun 2022
- Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure | Paul Novello, Thomas Fel, David Vigouroux | FAtt | 14/27/0 | 13 Jun 2022
- Xplique: A Deep Learning Explainability Toolbox | Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre | 11/30/0 | 09 Jun 2022
- Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark | Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel | ELM | 16/0/0 | 08 Jun 2022
- Fooling Explanations in Text Classifiers | Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard | AAML | 22/20/0 | 07 Jun 2022
- Saliency Cards: A Framework to Characterize and Compare Saliency Methods | Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan | FAtt, XAI | 30/8/0 | 07 Jun 2022
- A Human-Centric Take on Model Monitoring | Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi | 37/9/0 | 06 Jun 2022
- Use-Case-Grounded Simulations for Explanation Evaluation | Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar | FAtt, ELM | 22/24/0 | 05 Jun 2022
- Interpretable Mixture of Experts | Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, S. Feizi, Tomas Pfister | MoE | 20/10/0 | 05 Jun 2022
- Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations | Tessa Han, Suraj Srinivas, Himabindu Lakkaraju | FAtt | 32/87/0 | 02 Jun 2022
- Attribution-based Explanations that Provide Recourse Cannot be Robust | H. Fokkema, R. D. Heide, T. Erven | FAtt | 44/18/0 | 31 May 2022
- Scalable Interpretability via Polynomials | Abhimanyu Dubey, Filip Radenovic, D. Mahajan | 4/30/0 | 27 May 2022
- Towards a Theory of Faithfulness: Faithful Explanations of Differentiable Classifiers over Continuous Data | Nico Potyka, Xiang Yin, Francesca Toni | FAtt | 11/2/0 | 19 May 2022
- The Solvability of Interpretability Evaluation Metrics | Yilun Zhou, J. Shah | 68/8/0 | 18 May 2022
- Sparse Visual Counterfactual Explanations in Image Space | Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein | BDL, CML | 30/26/0 | 16 May 2022
- Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection | Fan Wang, A. Kong | 63/10/0 | 15 May 2022
- Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations | Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju | 40/56/0 | 15 May 2022
- Explainable Deep Learning Methods in Medical Image Classification: A Survey | Cristiano Patrício, João C. Neves, Luís F. Teixeira | XAI | 24/52/0 | 10 May 2022
- Should attention be all we need? The epistemic and ethical implications of unification in machine learning | N. Fishman, Leif Hancox-Li | 25/10/0 | 09 May 2022
- The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations | Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi | 38/77/0 | 06 May 2022
- ExSum: From Local Explanations to Model Understanding | Yilun Zhou, Marco Tulio Ribeiro, J. Shah | FAtt, LRM | 11/25/0 | 30 Apr 2022
- Poly-CAM: High resolution class activation map for convolutional neural networks | A. Englebert, O. Cornu, Christophe De Vleeschouwer | 22/10/0 | 28 Apr 2022
- It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers | Zheng Tang, Mihai Surdeanu | 19/6/0 | 25 Apr 2022
- Backdooring Explainable Machine Learning | Maximilian Noppel, Lukas Peter, Christian Wressnegger | AAML | 16/5/0 | 20 Apr 2022
- A Survey and Perspective on Artificial Intelligence for Security-Aware Electronic Design Automation | D. Koblah, R. Acharya, Daniel Capecci, Olivia P. Dizon-Paradis, Shahin Tajik, F. Ganji, D. Woodard, Domenic Forte | 20/12/0 | 19 Apr 2022
- Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention | Yu Yang, Seung Wook Kim, Jungseock Joo | FAtt | 11/17/0 | 10 Apr 2022
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models | Alexander Stevens, Johannes De Smedt | XAI, FaML | 12/12/0 | 30 Mar 2022
- Interpretable Prediction of Pulmonary Hypertension in Newborns using Echocardiograms | H. Ragnarsdóttir, Laura Manduchi, H. Michel, F. Laumer, S. Wellmann, Ece Ozkan, Julia-Franziska Vogt | 13/3/0 | 24 Mar 2022
- Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation | Hanjie Chen, Yangfeng Ji | OOD, AAML, VLM | 24/21/0 | 23 Mar 2022
- Rethinking Stability for Attribution-based Explanations | Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju | FAtt | 22/50/0 | 14 Mar 2022
- Explaining Classifiers by Constructing Familiar Concepts | Johannes Schneider, M. Vlachos | 27/15/0 | 07 Mar 2022
- Concept-based Explanations for Out-Of-Distribution Detectors | Jihye Choi, Jayaram Raghuram, Ryan Feng, Jiefeng Chen, S. Jha, Atul Prakash | OODD | 19/12/0 | 04 Mar 2022
- Evaluating Local Model-Agnostic Explanations of Learning to Rank Models with Decision Paths | Amir Hossein Akhavan Rahnama, Judith Butepage | XAI, FAtt | 11/0/0 | 04 Mar 2022
- Evaluating Feature Attribution Methods in the Image Domain | Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys | FAtt | 21/22/0 | 22 Feb 2022
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis | Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre | AAML | 23/41/0 | 15 Feb 2022
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective | Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, Sameer Singh | LRM | 16/64/0 | 03 Feb 2022
- The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective | Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju | 177/186/0 | 03 Feb 2022
- Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning | Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim | FAtt | 19/1/0 | 30 Jan 2022
- Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning | Amit Dhurandhar, K. Ramamurthy, Kartik Ahuja, Vijay Arya | FAtt | 12/4/0 | 28 Jan 2022
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior | Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova | 36/15/0 | 27 Jan 2022
- A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes | Mazda Moayeri, Phillip E. Pope, Yogesh Balaji, S. Feizi | VLM | 33/52/0 | 26 Jan 2022
- From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI | Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert | ELM, XAI | 28/395/0 | 20 Jan 2022