Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
arXiv:2212.14855, 30 December 2022 [FAtt]
Papers citing "Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces" (7 of 7 papers shown):
Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair. 29 Nov 2024. (79 / 0 / 0)
MambaLRP: Explaining Selective State Space Sequence Models
F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle. 11 Jun 2024. [Mamba] (38 / 9 / 0)
When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci. 28 Jun 2022. (42 / 4 / 0)
Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip H. S. Torr. 23 Jan 2022. [FAtt] (30 / 10 / 0)
High-Performance Large-Scale Image Recognition Without Normalization
Andrew Brock, Soham De, Samuel L. Smith, Karen Simonyan. 11 Feb 2021. [VLM] (220 / 450 / 0)
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar. 17 Oct 2019. [FAtt] (115 / 293 / 0)
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller. 24 Jun 2017. [FaML] (225 / 2,069 / 0)