Network Dissection: Quantifying Interpretability of Deep Visual Representations

19 April 2017
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
MILM, FAtt

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations"

Showing 50 of 207 citing papers.

Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose
Angtian Wang, Shenxiao Mei, Alan Yuille, Adam Kortylewski
3DV · 27 Oct 2021

StyleAlign: Analysis and Applications of Aligned StyleGAN Models
Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
21 Oct 2021

Quantifying Local Specialization in Deep Neural Networks
Shlomi Hod, Daniel Filan, Stephen Casper, Andrew Critch, Stuart J. Russell
13 Oct 2021

Robust Feature-Level Adversaries are Interpretability Tools
Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
AAML · 07 Oct 2021

Explaining Convolutional Neural Networks by Tagging Filters
Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber
FAtt · 20 Sep 2021

Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng
FAtt · 25 Aug 2021

Interpreting Face Inference Models using Hierarchical Network Dissection
Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas
CVBM · 23 Aug 2021

COVID-view: Diagnosis of COVID-19 using Chest CT
Shreeraj Jadhav, Gaofeng Deng, M. Zawin, Arie Kaufman
09 Aug 2021

Spatiotemporal Contrastive Learning of Facial Expressions in Videos
Shuvendu Roy, Ali Etemad
06 Aug 2021

What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali
01 Jul 2021

Inverting and Understanding Object Detectors
Ang Cao, Justin Johnson
ObjD · 26 Jun 2021

Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan
FAtt, XAI · 24 Jun 2021

3DB: A Framework for Debugging Computer Vision Models
Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, A. Madry
07 Jun 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
XAI · 15 May 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry
FAtt · 11 May 2021

Rationalization through Concepts
Diego Antognini, Boi Faltings
FAtt · 11 May 2021

Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
FAtt, XAI · 27 Apr 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
MIACV · 26 Apr 2021

Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms
A. Saydjari, D. Finkbeiner
22 Apr 2021

An Interpretability Illusion for BERT
Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg
MILM, FAtt · 14 Apr 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, X. Zhang
AAML · 16 Mar 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
XAI · 13 Jan 2021

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 10 Dec 2020

StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation
Zongze Wu, Dani Lischinski, Eli Shechtman
DRL · 25 Nov 2020

Teaching with Commentaries
Aniruddh Raghu, M. Raghu, Simon Kornblith, D. Duvenaud, Geoffrey E. Hinton
05 Nov 2020

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert
FAtt · 05 Nov 2020

Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
Iro Laina, Ruth C. Fong, Andrea Vedaldi
OCL · 27 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt · 23 Oct 2020

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN · 11 Sep 2020

CuratorNet: Visually-aware Recommendation of Art Images
Pablo Messina, Manuel Cartagena, Patricio Cerda, Felipe del-Rio, Denis Parra
09 Sep 2020

Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs
Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li
FAtt · 05 Aug 2020

Explainable Face Recognition
Jonathan R. Williford, Brandon B. May, J. Byrne
CVBM · 03 Aug 2020

Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance
Mattia Carletti, M. Terzi, Gian Antonio Susto
21 Jul 2020

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters
Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang
16 Jul 2020

Scientific Discovery by Generating Counterfactuals using Image Translation
Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan
DiffM, MedIm · 10 Jul 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt · 26 Jun 2020

GAN Memory with No Forgetting
Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin
CLL · 13 Jun 2020

Learning to Branch for Multi-Task Learning
Pengsheng Guo, Chen-Yu Lee, Daniel Ulbricht
02 Jun 2020

Interpretable and Accurate Fine-grained Recognition via Region Grouping
Zixuan Huang, Yin Li
21 May 2020

Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen
AI4CE · 02 Apr 2020

A Survey of Deep Learning for Scientific Discovery
M. Raghu, Erica Schmidt
OOD, AI4CE · 26 Mar 2020

Foundations of Explainable Knowledge-Enabled Systems
Shruthi Chari, Daniel Gruen, O. Seneviratne, D. McGuinness
17 Mar 2020

TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes
Gurpreet Singh, Soumyajit Gupta, Matt Lease, Clint Dawson
AI4TS, AI4CE · 05 Mar 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
03 Mar 2020

On Leveraging Pretrained GANs for Generation with Limited Data
Miaoyun Zhao, Yulai Cong, Lawrence Carin
26 Feb 2020

Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani, James Y. Zou
FAtt, TDI · 23 Feb 2020

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations
S. Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, S. Kambhampati
04 Feb 2020

Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems
C. E. Smith, Bowen Yu, Anjali Srivastava, Aaron L Halfaker, Loren G. Terveen, Haiyi Zhu
KELM · 14 Jan 2020

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
AAML, AI4CE · 08 Jan 2020

TAB-VCR: Tags and Attributes based Visual Commonsense Reasoning Baselines
Jingxiang Lin, Unnat Jain, A. Schwing
LRM, ReLM · 31 Oct 2019