Interpretable Convolutional Neural Networks (arXiv:1710.00935)
2 October 2017
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
FAtt

Papers citing "Interpretable Convolutional Neural Networks"

50 / 314 papers shown
Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations
Alexandros Doumanoglou, S. Asteriadis, D. Zarpalas
SSL, FAtt
19 Mar 2023
Interpreting Hidden Semantics in the Intermediate Layers of 3D Point Cloud Classification Neural Network
Weiquan Liu, Minghao Liu, Shijun Zheng, Cheng-i Wang
3DPC
12 Mar 2023
A Theoretical Framework for AI Models Explainability with Application in Biomedicine
Matteo Rizzo, Alberto Veneri, A. Albarelli, Claudio Lucchese, Marco Nobile, Cristina Conati
XAI
29 Dec 2022
On the Equivalence of the Weighted Tsetlin Machine and the Perceptron
Jivitesh Sharma, Ole-Christoffer Granmo, Lei Jiao
27 Dec 2022
Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
AAML
18 Dec 2022
State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions
Cheng Wang, Carolin (Haas) Lawrence, Mathias Niepert
10 Dec 2022
ResNet Structure Simplification with the Convolutional Kernel Redundancy Measure
Hongzhi Zhu, R. Rohling, Septimiu Salcudean
01 Dec 2022
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML
29 Nov 2022
OCTET: Object-aware Counterfactual Explanations
Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord
AAML
22 Nov 2022
Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
Poulami Sinhamahapatra, Lena Heidemann, Maureen Monnet, Karsten Roscher
22 Nov 2022
ATCON: Attention Consistency for Vision Models
Ali Mirzazadeh, Florian Dubost, M. Pike, Krish Maniar, Max Zuo, Christopher Lee-Messer, D. Rubin
18 Oct 2022
A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti, Lorenzo Corti, Agathe Balayn, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie Yang
17 Oct 2022
Interpreting Neural Policies with Disentangled Tree Representations
Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus
DRL
13 Oct 2022
ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning
Yu Fu, Yu Xie, Yanwei Fu, Jingjing Chen, Yu-Gang Jiang
11 Oct 2022
TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts
Zhihao Wang, Chuang Zhu
07 Oct 2022
Entropy-driven Unsupervised Keypoint Representation Learning in Videos
A. Younes, Simone Schaub-Meyer, Georgia Chalvatzaki
SSL
30 Sep 2022
Gait Recognition in the Wild with Multi-hop Temporal Switch
Jinkai Zheng, Xinchen Liu, Xiaoyan Gu, Yaoqi Sun, Chuang Gan, Jiyong Zhang, Wu Liu, C. Yan
CVBM
01 Sep 2022
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Mingli Song, Ming-hui Wu
ViT
22 Aug 2022
E Pluribus Unum Interpretable Convolutional Neural Networks
George Dimas, Eirini Cholopoulou, D. Iakovidis
10 Aug 2022
Statistical Attention Localization (SAL): Methodology and Application to Object Classification
Yijing Yang, Vasileios Magoulianitis, Xinyu Wang, C.-C. Jay Kuo
03 Aug 2022
Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors
Arnaud Boutillon, Pierre-Henri Conze, C. Pons, Valérie Burdin, Bhushan S Borotikar
27 Jul 2022
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Raukur, A. Ho, Stephen Casper, Dylan Hadfield-Menell
AAML, AI4CE
27 Jul 2022
Policy Optimization with Sparse Global Contrastive Explanations
Jiayu Yao, S. Parbhoo, Weiwei Pan, Finale Doshi-Velez
OffRL
13 Jul 2022
A multi-level interpretable sleep stage scoring system by infusing experts' knowledge into a deep network architecture
H. Niknazar, S. Mednick
11 Jul 2022
Activation Template Matching Loss for Explainable Face Recognition
Huawei Lin, Haozhe Liu, Qiufu Li, Linlin Shen
CVBM
05 Jul 2022
TopicFM: Robust and Interpretable Topic-Assisted Feature Matching
Khang Truong Giang, Soohwan Song, Sung-Guk Jo
01 Jul 2022
Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability
Svetlana Pavlitska, Christian Hubschneider, Lukas Struppek, J. Marius Zöllner
MoE
22 Apr 2022
Interventional Multi-Instance Learning with Deconfounded Instance-Level Prediction
Tiancheng Lin, Hongteng Xu, Canqian Yang, Yi Xu
20 Apr 2022
Semantic interpretation for convolutional neural networks: What makes a cat a cat?
Haonan Xu, Yuntian Chen, Dongxiao Zhang
FAtt
16 Apr 2022
Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention
Yu Yang, Seung Wook Kim, Jungseock Joo
FAtt
10 Apr 2022
Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection---Extended Version
Tung Kieu, B. Yang, Chenjuan Guo, Christian S. Jensen, Yan Zhao, Feiteng Huang, Kai Zheng
AI4TS
07 Apr 2022
AutoProtoNet: Interpretability for Prototypical Networks
Pedro Sandoval Segura, W. Lawson
02 Apr 2022
Diffusion Models for Counterfactual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie
DiffM
29 Mar 2022
Attributable Visual Similarity Learning
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
28 Mar 2022
Concept Embedding Analysis: A Review
Gesina Schwalbe
25 Mar 2022
Explaining, Evaluating and Enhancing Neural Networks' Learned Representations
Marco Bertolini, Djork-Arné Clevert, F. Montanari
FAtt
18 Feb 2022
A Lightweight, Efficient and Explainable-by-Design Convolutional Neural Network for Internet Traffic Classification
Kevin Fauvel, Fuxing Chen, Dario Rossi
11 Feb 2022
Towards Disentangling Information Paths with Coded ResNeXt
Apostolos Avranas, Marios Kountouris
FAtt
10 Feb 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt
30 Jan 2022
LAP: An Attention-Based Module for Concept Based Self-Interpretation and Knowledge Injection in Convolutional Neural Networks
Rassa Ghavami Modegh, Ahmadali Salimi, Alireza Dizaji, Hamid R. Rabiee
FAtt
27 Jan 2022
Attention cannot be an Explanation
Arjun Reddy Akula, Song-Chun Zhu
FAtt, XAI
26 Jan 2022
Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning
Tongzhou Mu, Kaixiang Lin, Fei Niu, Govind Thattai
OffRL
21 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI
20 Jan 2022
Disentangled Latent Transformer for Interpretable Monocular Height Estimation
Zhitong Xiong, Sining Chen, Yilei Shi, Xiaoxiang Zhu
ViT
17 Jan 2022
Effective Representation to Capture Collaboration Behaviors between Explainer and User
Arjun Reddy Akula, Song-Chun Zhu
10 Jan 2022
Scope and Sense of Explainability for AI-Systems
Anastasia-Maria Leventi-Peetz, T. Östreich, Werner Lennartz, Kai Weber
20 Dec 2021
Learning Interpretable Models Through Multi-Objective Neural Architecture Search
Zachariah Carmichael, Tim Moon, S. A. Jacobs
AI4CE
16 Dec 2021
Decomposing the Deep: Finding Class Specific Filters in Deep CNNs
Akshay Badola, Cherian Roy, V. Padmanabhan, R. Lal
FAtt
14 Dec 2021
Towards Explainable Artificial Intelligence in Banking and Financial Services
Ambreen Hanif
14 Dec 2021
Applications of Explainable AI for 6G: Technical Aspects, Use Cases, and Research Challenges
Shen Wang, M. Qureshi, Luis Miralles-Pechuán, Thien Huynh-The, Thippa Reddy Gadekallu, Madhusanka Liyanage
09 Dec 2021