ResearchTrend.AI
Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
arXiv:1710.00935 · 2 October 2017 · FAtt
Papers citing "Interpretable Convolutional Neural Networks" (showing 50 of 314)
Each entry lists title · authors · topic tags (where shown) · date.
Training Deep Models to be Explained with Fewer Examples · Tomoharu Iwata, Yuya Yoshikawa · FAtt · 07 Dec 2021
STEEX: Steering Counterfactual Explanations with Semantics · P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord · 17 Nov 2021
Impact of loss functions on the performance of a deep neural network designed to restore low-dose digital mammography · Hongming Shan, R. Vimieiro, L. Borges, M. Vieira, Ge Wang · MedIm · 12 Nov 2021
ODMTCNet: An Interpretable Multi-view Deep Neural Network Architecture for Image Feature Representation · Lei Gao, Zheng Guo, L. Guan · 28 Oct 2021
TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency · Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang · FAtt · 11 Oct 2021
Trustworthy AI: From Principles to Practices · Yue Liu, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou · 04 Oct 2021
Learning Interpretable Concept Groups in CNNs · Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft · 21 Sep 2021
CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models · Arjun Reddy Akula, Keze Wang, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu · 03 Sep 2021
Spatio-Temporal Perturbations for Video Attribution · Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato · 01 Sep 2021
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation · Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael C. Kampffmeyer · 27 Aug 2021
Towards Interpretable Deep Metric Learning with Structural Matching · Wenliang Zhao, Yongming Rao, Ziyi Wang, Jiwen Lu, Jie Zhou · FedML · 12 Aug 2021
Towards Interpretable Deep Networks for Monocular Depth Estimation · Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li · FAtt · 11 Aug 2021
Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models · Zhenge Zhao, Panpan Xu, C. Scheidegger, Liu Ren · 08 Aug 2021
Mixture of Linear Models Co-supervised by Deep Neural Networks · Beomseok Seo, Lin Lin, Jia Li · 05 Aug 2021
Dynamic Neural Network Architectural and Topological Adaptation and Related Methods -- A Survey · Lorenz Kummer · AI4CE · 28 Jul 2021
Interpretable Compositional Convolutional Neural Networks · Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang · FAtt · 09 Jul 2021
When and How to Fool Explainable Models (and Humans) with Adversarial Examples · Jon Vadillo, Roberto Santana, Jose A. Lozano · SILM, AAML · 05 Jul 2021
Prediction of Hereditary Cancers Using Neural Networks · Zoe Guan, Giovanni Parmigiani, D. Braun, L. Trippa · MedIm · 25 Jun 2021
Towards Fully Interpretable Deep Neural Networks: Are We There Yet? · Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee · FaML, AI4CE · 24 Jun 2021
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning · Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger · 23 Jun 2021
A Game-Theoretic Taxonomy of Visual Concepts in DNNs · Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang · 21 Jun 2021
Thinking Like Transformers · Gail Weiss, Yoav Goldberg, Eran Yahav · AI4CE · 13 Jun 2021
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine · Jivitesh Sharma, Rohan Kumar Yadav, Ole-Christoffer Granmo, Lei Jiao · VLM · 30 May 2021
How to Explain Neural Networks: an Approximation Perspective · Hangcheng Dong, Bingguo Liu, Fengdong Chen, Dong Ye, Guodong Liu · FAtt · 17 May 2021
Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity · Ryan Henderson, Djork-Arné Clevert, F. Montanari · 11 May 2021
Carrying out CNN Channel Pruning in a White Box · Yuxin Zhang, Mingbao Lin, Chia-Wen Lin, Jie Chen, Feiyue Huang, Yongjian Wu, Yonghong Tian, Rongrong Ji · VLM · 24 Apr 2021
Improving Attribution Methods by Learning Submodular Functions · Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian · TDI · 19 Apr 2021
An Overview of Human Activity Recognition Using Wearable Sensors: Healthcare and Artificial Intelligence · Rex Liu, Albara Ah Ramli, Huan Zhang, Esha Datta, Xin Liu · 29 Mar 2021
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges · Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong · FaML, AI4CE, LRM · 20 Mar 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond · Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou · AAML, FaML, XAI, HAI · 19 Mar 2021
Danish Fungi 2020 -- Not Just Another Image Recognition Dataset · Lukás Picek, Milan Šulc, Jirí Matas, J. Heilmann-Clausen, T. Jeppesen, T. Læssøe, T. Frøslev · 18 Mar 2021
Quantitative Performance Assessment of CNN Units via Topological Entropy Calculation · Yang Zhao, Hao Zhang · 17 Mar 2021
Unveiling the Potential of Structure Preserving for Weakly Supervised Object Localization · Xingjia Pan, Yingguo Gao, Zhiwen Lin, Fan Tang, Weiming Dong, Haolei Yuan, Feiyue Huang, Changsheng Xu · WSOL · 08 Mar 2021
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network · Lili Pan, Peijun Tang, Zhiyong Chen, Zenglin Xu · GAN, DRL · 05 Mar 2021
Human-Understandable Decision Making for Visual Recognition · Xiaowei Zhou, Jie Yin, Ivor Tsang, Chen Wang · FAtt, HAI · 05 Mar 2021
Deep learning based electrical noise removal enables high spectral optoacoustic contrast in deep tissue · C. Dehner, Ivan Olefir, K. Chowdhury, D. Jüstel, V. Ntziachristos · 24 Feb 2021
VitrAI -- Applying Explainable AI in the Real World · Marc Hanussek, Falko Kötter, Maximilien Kintz, Jens Drawehn · 12 Feb 2021
PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification · Dominique Mercier, Andreas Dengel, Sheraz Ahmed · AI4TS · 11 Feb 2021
HYDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks · Yuanyuan Chen, Boyang Albert Li, Han Yu, Pengcheng Wu, Chunyan Miao · TDI · 04 Feb 2021
A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks · Atefeh Shahroudnejad · FaML, AAML, AI4CE, XAI · 02 Feb 2021
Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling · David Harbecke · AAML · 28 Jan 2021
CORL: Compositional Representation Learning for Few-Shot Classification · Ju He, Adam Kortylewski, Alan Yuille · OCL · 28 Jan 2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs · Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan · 24 Jan 2021
i-Algebra: Towards Interactive Interpretability of Deep Neural Networks · Xinyang Zhang, Ren Pang, S. Ji, Fenglong Ma, Ting Wang · HAI, AI4CE · 22 Jan 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges · Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021
Comprehensible Convolutional Neural Networks via Guided Concept Learning · Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee · SSL · 11 Jan 2021
A Survey on Neural Network Interpretability · Yu Zhang, Peter Tiño, A. Leonardis, K. Tang · FaML, XAI · 28 Dec 2020
Image Translation via Fine-grained Knowledge Transfer · Xuanhong Chen, Ziang Liu, Ting Qiu, Bingbing Ni, Naiyuan Liu, Xiwei Hu, Yuhan Li · 21 Dec 2020
MA-Unet: An improved version of Unet based on multi-scale and attention mechanism for medical image segmentation · Yutong Cai, Yong Wang · SSeg · 20 Dec 2020
Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation · Sophie Burkhardt, Jannis Brugger, Nicolas Wagner, Zahra Ahmadi, Kristian Kersting, Stefan Kramer · NAI, FAtt · 15 Dec 2020