Network Dissection: Quantifying Interpretability of Deep Visual Representations

19 April 2017
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
arXiv:1704.05796

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations"

Showing 50 of 842 citing papers.
Quantifying Local Specialization in Deep Neural Networks
Shlomi Hod, Daniel Filan, Stephen Casper, Andrew Critch, Stuart J. Russell
13 Oct 2021
Robust Feature-Level Adversaries are Interpretability Tools
Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
07 Oct 2021
Exploring the Common Principal Subspace of Deep Features in Neural Networks
Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou
06 Oct 2021
Self-conditioning pre-trained language models
Xavier Suau, Luca Zappella, N. Apostoloff
30 Sep 2021
TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Device
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Ji Lin, Chuang Gan, Kuan-Chieh Wang, Song Han
27 Sep 2021
Learning Interpretable Concept Groups in CNNs
Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Matthias Kirchler
21 Sep 2021
Explaining Convolutional Neural Networks by Tagging Filters
Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber
20 Sep 2021
Detection Accuracy for Evaluating Compositional Explanations of Units
Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco
16 Sep 2021
Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
02 Sep 2021
Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng
25 Aug 2021
Interpreting Face Inference Models using Hierarchical Network Dissection
International Journal of Computer Vision (IJCV), 2021
Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas
23 Aug 2021
Explaining Bayesian Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Matthias Kirchler
23 Aug 2021
Towards Interpretable Deep Networks for Monocular Depth Estimation
IEEE International Conference on Computer Vision (ICCV), 2021
Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li
11 Aug 2021
Interpreting Generative Adversarial Networks for Interactive Image Generation
Bolei Zhou
10 Aug 2021
COVID-view: Diagnosis of COVID-19 using Chest CT
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021
Shreeraj Jadhav, Gaofeng Deng, M. Zawin, Arie Kaufman
09 Aug 2021
Spatiotemporal Contrastive Learning of Facial Expressions in Videos
Affective Computing and Intelligent Interaction (ACII), 2021
Shuvendu Roy, Ali Etemad
06 Aug 2021
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein
03 Aug 2021
Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior
International Conference on Human Factors in Computing Systems (CHI), 2021
Angie Boggust, Benjamin Hoover, Arvindmani Satyanarayan, Hendrik Strobelt
20 Jul 2021
One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
11 Jul 2021
Using Causal Analysis for Conceptual Deep Learning Explanation
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2021
Sumedha Singla, Stephen Wallace, Sofia Triantafillou, Kayhan Batmanghelich
10 Jul 2021
Interpretable Compositional Convolutional Neural Networks
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang
09 Jul 2021
Subspace Clustering Based Analysis of Neural Networks
Uday Singh Saini, Pravallika Devineni, Evangelos E. Papalexakis
02 Jul 2021
What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali
01 Jul 2021
Inverting and Understanding Object Detectors
Ang Cao, Justin Johnson
26 Jun 2021
Towards Fully Interpretable Deep Neural Networks: Are We There Yet?
Sandareka Wickramanayake, Wynne Hsu, Yang Deng
24 Jun 2021
Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan
24 Jun 2021
Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations
IEEE Access, 2021
Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michal Górszczak, Barbara Rychalska, K. Lewandowska, Tomasz Trzciński, Bartosz Zieliński
21 Jun 2021
A Game-Theoretic Taxonomy of Visual Concepts in DNNs
Feng He, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang
21 Jun 2021
Cogradient Descent for Dependable Learning
Runqi Wang, Baochang Zhang, Lian Zhuo, QiXiang Ye, David Doermann
20 Jun 2021
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise
Computer Vision and Pattern Recognition (CVPR), 2021
A. Kapishnikov, Subhashini Venugopalan, Besim Avci, Benjamin D. Wedin, Michael Terry, Tolga Bolukbasi
17 Jun 2021
Best of both worlds: local and global explanations with human-understandable concepts
Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim
16 Jun 2021
On the Evolution of Neuron Communities in a Deep Learning Architecture
Sakib Mostafa, Debajyoti Mondal
08 Jun 2021
3DB: A Framework for Debugging Computer Vision Models
Neural Information Processing Systems (NeurIPS), 2021
Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry
07 Jun 2021
Improving Compositionality of Neural Networks by Decoding Representations to Inputs
Neural Information Processing Systems (NeurIPS), 2021
Mike Wu, Noah D. Goodman, Stefano Ermon
01 Jun 2021
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine
Jivitesh Sharma, Rohan Kumar Yadav, Ole-Christoffer Granmo, Lei Jiao
30 May 2021
The Definitions of Interpretability and Learning of Interpretable Models
Weishen Pan, Changshui Zhang
29 May 2021
Transparent Model of Unabridged Data (TMUD)
Jie Xu, Min Ding
23 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Data Mining and Knowledge Discovery (DMKD), 2021
Gesina Schwalbe, Bettina Finzel
15 May 2021
The Low-Dimensional Linear Geometry of Contextualized Word Representations
Conference on Computational Natural Language Learning (CoNLL), 2021
Evan Hernandez, Jacob Andreas
15 May 2021
Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks
IEEE International Conference on Systems, Man and Cybernetics (SMC), 2021
Mohammad Nokhbeh Zaeem, Majid Komeili
14 May 2021
Verification of Size Invariance in DNN Activations using Concept Embeddings
Artificial Intelligence Applications and Innovations (AIAI), 2021
Gesina Schwalbe
14 May 2021
XAI Handbook: Towards a Unified Framework for Explainable AI
Sebastián M. Palacio, Adriano Lucieri, Mohsin Munir, Jörn Hees, Sheraz Ahmed, Andreas Dengel
14 May 2021
Boosting Light-Weight Depth Estimation Via Knowledge Distillation
Knowledge Science, Engineering and Management (KSEM), 2021
Junjie Hu, Chenyou Fan, Hualie Jiang, Xiyue Guo, Yuan Gao, Xiangyong Lu, Tin Lun Lam
13 May 2021
Leveraging Sparse Linear Layers for Debuggable Deep Networks
International Conference on Machine Learning (ICML), 2021
Eric Wong, Shibani Santurkar, Aleksander Madry
11 May 2021
Rationalization through Concepts
Findings, 2021
Diego Antognini, Boi Faltings
11 May 2021
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
05 May 2021
Do Feature Attribution Methods Correctly Attribute Features?
AAAI Conference on Artificial Intelligence (AAAI), 2021
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
27 Apr 2021
Exploiting Explanations for Model Inversion Attacks
IEEE International Conference on Computer Vision (ICCV), 2021
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
26 Apr 2021
EigenGAN: Layer-Wise Eigen-Learning for GANs
IEEE International Conference on Computer Vision (ICCV), 2021
Zhenliang He, Meina Kan, Shiguang Shan
26 Apr 2021
Neural Mean Discrepancy for Efficient Out-of-Distribution Detection
Computer Vision and Pattern Recognition (CVPR), 2021
Xin Dong, Junfeng Guo, Ang Li, W. Ting, Cong Liu, H. T. Kung
23 Apr 2021