Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks

10 January 2018
Ruth C. Fong
Andrea Vedaldi
    FAtt

Papers citing "Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks"

50 / 50 papers shown

ChannelExplorer: Exploring Class Separability Through Activation Channel Visualization
Md Rahat-uz-Zaman, Bei Wang, Paul Rosen
06 May 2025

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
XAI, FAtt
11 Jun 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan
27 May 2024

Linear Explanations for Individual Neurons
Tuomas P. Oikarinen, Tsui-Wei Weng
FAtt, MILM
10 May 2024

A Multimodal Automated Interpretability Agent
Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
22 Apr 2024

Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
Usha Bhalla, Alexander X. Oesterling, Suraj Srinivas, Flavio du Pin Calmon, Himabindu Lakkaraju
16 Feb 2024

PICNN: A Pathway towards Interpretable Convolutional Neural Networks
Wengang Guo, Jiayi Yang, Huilin Yin, Qijun Chen, Wei Ye
19 Dec 2023

Codebook Features: Sparse and Discrete Interpretability for Neural Networks
Alex Tamkin, Mohammad Taufeeque, Noah D. Goodman
26 Oct 2023

Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John, Vineeth N. Balasubramanian, C. V. Jawahar
CVBM
26 Sep 2023

Interpretation on Multi-modal Visual Fusion
Hao Chen, Hao Zhou, Yongjian Deng
19 Aug 2023

Identifying Interpretable Subspaces in Image Representations
Neha Kalibhat, S. Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, S. Feizi
FAtt
20 Jul 2023

Causal Analysis for Robust Interpretability of Neural Networks
Ola Ahmad, Nicolas Béreux, Loïc Baret, V. Hashemi, Freddy Lecue
CML
15 May 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
27 Mar 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
FAtt
30 Dec 2022

Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
Poulami Sinhamahapatra, Lena Heidemann, Maureen Monnet, Karsten Roscher
22 Nov 2022

"Help Me Help the AI": Understanding How Explainability Can Support
  Human-AI Interaction
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
43
108
0
02 Oct 2022
Formal Conceptual Views in Neural Networks
Johannes Hirth, Tom Hanika
27 Sep 2022

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell
AAML, AI4CE
27 Jul 2022

Disentangling visual and written concepts in CLIP
Joanna Materzyńska, Antonio Torralba, David Bau
CoGe
15 Jun 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou
31 May 2022

Exploring Hidden Semantics in Neural Networks with Symbolic Regression
Yuanzhen Luo, Qiang Lu, Xilei Hu, Jake Luo, Zhiguang Wang
22 Apr 2022

Deep Unlearning via Randomized Conditionally Independent Hessians
Ronak R. Mehta, Sourav Pal, Vikas Singh, Sathya Ravi
MU
15 Apr 2022

VisCUIT: Visual Auditor for Bias in CNN Image Classifier
Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau
12 Apr 2022

Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau
30 Mar 2022

Concept Embedding Analysis: A Review
Gesina Schwalbe
25 Mar 2022

Sparse Subspace Clustering for Concept Discovery (SSCCD)
Johanna Vielhaben, Stefan Blücher, Nils Strodthoff
11 Mar 2022

Explaining, Evaluating and Enhancing Neural Networks' Learned Representations
Marco Bertolini, Djork-Arné Clevert, F. Montanari
FAtt
18 Feb 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr
FAtt
23 Jan 2022

PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta, Esra Suel, Seth Flaxman
FAtt
31 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Semantic Communications With AI Tasks
Yang Yang, Caili Guo, Fangfang Liu, Chuanhong Liu, Lunan Sun, Qizheng Sun, Jiujiu Chen
29 Sep 2021

Explaining Convolutional Neural Networks by Tagging Filters
Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber
FAtt
20 Sep 2021

NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau
HAI
29 Aug 2021

Towards Interpretable Deep Networks for Monocular Depth Estimation
Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li
FAtt
11 Aug 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
XAI
15 May 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xiangyu Zhang
AAML
16 Mar 2021

Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
Iro Laina, Ruth C. Fong, Andrea Vedaldi
OCL
27 Oct 2020

Now You See Me (CME): Concept-based Model Extraction
Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller
25 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt
23 Oct 2020

Contextual Semantic Interpretability
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia
SSL
18 Sep 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
03 Mar 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
FAtt
10 Feb 2020

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
AAML
18 Oct 2019

Learning Generalisable Omni-Scale Representations for Person Re-Identification
Kaiyang Zhou, Yongxin Yang, Andrea Cavallaro, Tao Xiang
15 Oct 2019

Understanding Neural Networks via Feature Visualization: A survey
Anh Nguyen, J. Yosinski, Jeff Clune
FAtt
18 Apr 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
FAtt, 3DH, HAI
04 Apr 2019

Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
FAtt
18 Dec 2018

Biased Embeddings from Wild Data: Measuring, Understanding and Removing
Adam Sutton, Thomas Lansdall-Welfare, N. Cristianini
16 Jun 2018

Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing
Amir Rosenfeld, John K. Tsotsos
MLT
02 Feb 2018

Do semantic parts emerge in Convolutional Neural Networks?
Abel Gonzalez-Garcia, Davide Modolo, V. Ferrari
13 Jul 2016