Towards falsifiable interpretability research
Matthew L. Leavitt, Ari S. Morcos
arXiv:2010.12016 · 22 October 2020
AAML · AI4CE
Papers citing "Towards falsifiable interpretability research" (22 / 22 papers shown)
A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i
Kola Ayonrinde, Louis Jaburi · MILM · 86 / 1 / 0 · 01 May 2025

An Actionability Assessment Tool for Explainable AI
Ronal Singh, Tim Miller, L. Sonenberg, Eduardo Velloso, F. Vetere, Piers Howe, Paul Dourish · 27 / 2 / 0 · 19 Jun 2024

Acoustic characterization of speech rhythm: going beyond metrics with recurrent neural networks
François Deloche, Laurent Bonnasse-Gahot, Judit Gervain · 21 / 0 / 0 · 22 Jan 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain · FaML · 24 / 0 / 0 · 20 Nov 2023

Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
David A. Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane · MILM, FAtt · 44 / 7 / 0 · 17 Oct 2023

The Representational Status of Deep Learning Models
Eamon Duede · 19 / 0 / 0 · 21 Mar 2023

Higher-order mutual information reveals synergistic sub-networks for multi-neuron importance
Kenzo Clauw, S. Stramaglia, Daniele Marinazzo · SSL, FAtt · 30 / 6 / 0 · 01 Nov 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer · 29 / 40 / 0 · 22 Aug 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven · FAtt · 42 / 18 / 0 · 31 May 2022

Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Greta Warren, Mark T. Keane, R. Byrne · CML · 25 / 22 / 0 · 21 Apr 2022

An explainability framework for cortical surface-based deep learning
Fernanda L. Ribeiro, S. Bollmann, R. Cunnington, A. M. Puckett · FAtt, AAML, MedIm · 16 / 2 / 0 · 15 Mar 2022

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis, E. Barnes, I. Ebert‐Uphoff · 19 / 73 / 0 · 07 Feb 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 66 / 114 / 0 · 06 Dec 2021

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel · FAtt · 39 / 31 / 0 · 23 Jun 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry · FAtt · 20 / 88 / 0 · 11 May 2021

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert‐Uphoff, E. Barnes · OOD · 23 / 75 / 0 · 18 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli · AAML, FAtt · 21 / 57 / 0 · 25 Feb 2021

Estimating Example Difficulty Using Variance of Gradients
Chirag Agarwal, Daniel D'souza, Sara Hooker · 208 / 107 / 0 · 26 Aug 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos · 50 / 33 / 0 · 03 Mar 2020

Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba · FAtt · 54 / 116 / 0 · 07 Jun 2018

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · 234 / 2,235 / 0 · 24 Jun 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 242 / 3,681 / 0 · 28 Feb 2017