Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
Ruth C. Fong, Andrea Vedaldi · 10 January 2018 · arXiv:1801.03454 · FAtt
Papers citing "Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks" (49 papers shown)

ChannelExplorer: Exploring Class Separability Through Activation Channel Visualization
Md Rahat-uz-Zaman, Bei Wang, Paul Rosen · 06 May 2025

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger · XAI, FAtt · 11 Jun 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan · 27 May 2024

Linear Explanations for Individual Neurons
Tuomas P. Oikarinen, Tsui-Wei Weng · FAtt, MILM · 10 May 2024

A Multimodal Automated Interpretability Agent
Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba · 22 Apr 2024

Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
Usha Bhalla, Alexander X. Oesterling, Suraj Srinivas, Flavio du Pin Calmon, Himabindu Lakkaraju · 16 Feb 2024

PICNN: A Pathway towards Interpretable Convolutional Neural Networks
Wengang Guo, Jiayi Yang, Huilin Yin, Qijun Chen, Wei Ye · 19 Dec 2023

Codebook Features: Sparse and Discrete Interpretability for Neural Networks
Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman · 26 Oct 2023

Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John, Vineeth N. Balasubramanian, C. V. Jawahar · CVBM · 26 Sep 2023

Interpretation on Multi-modal Visual Fusion
Hao Chen, Hao Zhou, Yongjian Deng · 19 Aug 2023

Identifying Interpretable Subspaces in Image Representations
Neha Kalibhat, S. Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, S. Feizi · FAtt · 20 Jul 2023

Causal Analysis for Robust Interpretability of Neural Networks
Ola Ahmad, Nicolas Béreux, Loïc Baret, V. Hashemi, Freddy Lecue · CML · 15 May 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky · 27 Mar 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon · FAtt · 30 Dec 2022

Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
Poulami Sinhamahapatra, Lena Heidemann, Maureen Monnet, Karsten Roscher · 22 Nov 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández · 02 Oct 2022

Formal Conceptual Views in Neural Networks
Johannes Hirth, Tom Hanika · 27 Sep 2022

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell · AAML, AI4CE · 27 Jul 2022

Disentangling visual and written concepts in CLIP
Joanna Materzyńska, Antonio Torralba, David Bau · CoGe · 15 Jun 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou · 31 May 2022

Deep Unlearning via Randomized Conditionally Independent Hessians
Ronak R. Mehta, Sourav Pal, Vikas Singh, Sathya Ravi · MU · 15 Apr 2022

VisCUIT: Visual Auditor for Bias in CNN Image Classifier
Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau · 12 Apr 2022

Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau · 30 Mar 2022

Concept Embedding Analysis: A Review
Gesina Schwalbe · 25 Mar 2022

Sparse Subspace Clustering for Concept Discovery (SSCCD)
Johanna Vielhaben, Stefan Blücher, Nils Strodthoff · 11 Mar 2022

Explaining, Evaluating and Enhancing Neural Networks' Learned Representations
Marco Bertolini, Djork-Arné Clevert, F. Montanari · FAtt · 18 Feb 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr · FAtt · 23 Jan 2022

PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta, Esra Suel, Seth Flaxman · FAtt · 31 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 06 Dec 2021

Semantic Communications With AI Tasks
Yang Yang, Caili Guo, Fangfang Liu, Chuanhong Liu, Lunan Sun, Qizheng Sun, Jiujiu Chen · 29 Sep 2021

Explaining Convolutional Neural Networks by Tagging Filters
Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber · FAtt · 20 Sep 2021

NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau · HAI · 29 Aug 2021

Towards Interpretable Deep Networks for Monocular Depth Estimation
Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li · FAtt · 11 Aug 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel · XAI · 15 May 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · AAML · 16 Mar 2021

Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
Iro Laina, Ruth C. Fong, Andrea Vedaldi · OCL · 27 Oct 2020

Now You See Me (CME): Concept-based Model Extraction
Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller · 25 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel · FAtt · 23 Oct 2020

Contextual Semantic Interpretability
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia · SSL · 18 Sep 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos · 03 Mar 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee · FAtt · 10 Feb 2020

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi · AAML · 18 Oct 2019

Learning Generalisable Omni-Scale Representations for Person Re-Identification
Kaiyang Zhou, Yongxin Yang, Andrea Cavallaro, Tao Xiang · 15 Oct 2019

Understanding Neural Networks via Feature Visualization: A survey
Anh Nguyen, J. Yosinski, Jeff Clune · FAtt · 18 Apr 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau · FAtt, 3DH, HAI · 04 Apr 2019

Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang · FAtt · 18 Dec 2018

Biased Embeddings from Wild Data: Measuring, Understanding and Removing
Adam Sutton, Thomas Lansdall-Welfare, N. Cristianini · 16 Jun 2018

Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing
Amir Rosenfeld, John K. Tsotsos · MLT · 02 Feb 2018

Do semantic parts emerge in Convolutional Neural Networks?
Abel Gonzalez-Garcia, Davide Modolo, V. Ferrari · 13 Jul 2016