Interpreting CNNs via Decision Trees (arXiv:1802.00121)
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu
FAtt · 1 February 2018

Papers citing "Interpreting CNNs via Decision Trees"

Showing 50 of 134 citing papers.

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr
FAtt · 23 Jan 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI · 20 Jan 2022

A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts
Ying-Shuai Wang, Yunxia Liu, Licong Dong, Xuzhou Wu, Huabin Zhang, Qiongyu Ye, Desheng Sun, Xiaobo Zhou, Kehong Yuan
19 Jan 2022

Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
Devleena Das, Been Kim, Sonia Chernova
11 Jan 2022

DeepVisualInsight: Time-Travelling Visualization for Spatio-Temporal Causality of Deep Classification Training
Xiangli Yang, Yun Lin, Ruofan Liu, Zhenfeng He, Chao Wang, Jinlong Dong, Hong Mei
31 Dec 2021

Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions
Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, Randy Goebel
21 Dec 2021

Encoding Hierarchical Information in Neural Networks helps in Subpopulation Shift
Amitangshu Mukherjee, Isha Garg, Kaushik Roy
20 Dec 2021

Decomposing the Deep: Finding Class Specific Filters in Deep CNNs
Akshay Badola, Cherian Roy, V. Padmanabhan, R. Lal
FAtt · 14 Dec 2021

Collaborative Semantic Aggregation and Calibration for Federated Domain Generalization
Junkun Yuan, Xu Ma, Defang Chen, Leilei Gan, Lanfen Lin, Kun Kuang
FedML · 13 Oct 2021

Trustworthy AI: From Principles to Practices
Yue Liu, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models
Arjun Reddy Akula, Keze Wang, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu
03 Sep 2021

Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou
02 Sep 2021

Neural-to-Tree Policy Distillation with Policy Improvement Criterion
Zhaorong Li, Yang Yu, Yingfeng Chen, Ke Chen, Zhipeng Hu, Changjie Fan
16 Aug 2021

Finding Representative Interpretations on Convolutional Neural Networks
P. C. Lam, Lingyang Chu, Maxim Torgonskiy, J. Pei, Yong Zhang, Lanjun Wang
FAtt, SSL, HAI · 13 Aug 2021

Towards Interpretable Deep Metric Learning with Structural Matching
Wenliang Zhao, Yongming Rao, Ziyi Wang, Jiwen Lu, Jie Zhou
FedML · 12 Aug 2021

Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models
Zhenge Zhao, Panpan Xu, C. Scheidegger, Liu Ren
08 Aug 2021

SONG: Self-Organizing Neural Graphs
Lukasz Struski, Tomasz Danel, Marek Śmieja, Jacek Tabor, Bartosz Zieliński
28 Jul 2021

Explainable Diabetic Retinopathy Detection and Retinal Image Generation
Yuhao Niu, Lin Gu, Yitian Zhao, Feng Lu
MedIm · 01 Jul 2021

Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning
Yilin Wang, Shaozuo Yu, Xiaokang Yang, Wei Shen
05 Jun 2021

Explainable Activity Recognition for Smart Home Systems
Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova
20 May 2021

On Guaranteed Optimal Robust Explanations for NLP Models
Emanuele La Malfa, A. Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Z. Kwiatkowska
FAtt · 08 May 2021

Robust Semantic Interpretability: Revisiting Concept Activation Vectors
J. Pfau, A. Young, Jerome Wei, Maria L. Wei, Michael J. Keiser
FAtt · 06 Apr 2021

Explainability-aided Domain Generalization for Image Classification
Robin M. Schmidt
FAtt, OOD · 05 Apr 2021

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
AAML, FaML, XAI, HAI · 19 Mar 2021

Explainable Person Re-Identification with Attribute-guided Metric Distillation
Xiaodong Chen, Xinchen Liu, Wu Liu, Xiaoping Zhang, Yongdong Zhang, Tao Mei
02 Mar 2021

Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition
Jiebin Yan, Yu Zhong, Yuming Fang, Zhangyang Wang, Kede Ma
UQCV · 27 Feb 2021

EUCA: the End-User-Centered Explainable AI Framework
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
04 Feb 2021

A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks
Atefeh Shahroudnejad
FaML, AAML, AI4CE, XAI · 02 Feb 2021

Comprehensible Convolutional Neural Networks via Guided Concept Learning
Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee
SSL · 11 Jan 2021

Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery
Devleena Das, Siddhartha Banerjee, Sonia Chernova
05 Jan 2021

Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Meike Nauta, Ron van Bree, C. Seifert
03 Dec 2020

Interpretable Visual Reasoning via Induced Symbolic Space
Zhonghao Wang, Kai Wang, Mo Yu, Jinjun Xiong, Wen-mei W. Hwu, M. Hasegawa-Johnson, Humphrey Shi
LRM, OCL · 23 Nov 2020

Explainable AI for System Failures: Generating Explanations that Improve Human Assistance in Fault Recovery
Devleena Das, Siddhartha Banerjee, Sonia Chernova
LRM · 18 Nov 2020

A Quantitative Perspective on Values of Domain Knowledge for Machine Learning
Jianyi Yang, Shaolei Ren
FAtt, FaML · 17 Nov 2020

Generalized Constraints as A New Mathematical Problem in Artificial Intelligence: A Review and Perspective
Bao-Gang Hu, Hanbing Qu
AI4CE · 12 Nov 2020

ERIC: Extracting Relations Inferred from Convolutions
Joe Townsend, Theodoros Kasioumis, Hiroya Inakoshi
NAI, FAtt · 19 Oct 2020

Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
Boris Kovalerchuk, M. Ahmad (University of Washington Tacoma)
21 Sep 2020

Contextual Semantic Interpretability
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia
SSL · 18 Sep 2020

SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition
Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, R. Kawasaki, Hajime Nagahara
OCL · 14 Sep 2020

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters
Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang
16 Jul 2020

Learning a functional control for high-frequency finance
Laura Leal, Mathieu Laurière, Charles-Albert Lehalle
AIFin · 17 Jun 2020

SegNBDT: Visual Decision Rules for Segmentation
Alvin Wan, Daniel Ho, You Song, Henk Tillman, Sarah Adel Bargal, Joseph E. Gonzalez
SSeg · 11 Jun 2020

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo
XAI · 29 May 2020

Interpreting the Latent Space of GANs via Correlation Analysis for Controllable Concept Manipulation
Ziqiang Li, Rentuo Tao, Hongjing Niu, Bin Li
GAN · 23 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
AAML, XAI · 30 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Helen Zhou
AAML · 23 Apr 2020

Dendrite Net: A White-Box Module for Classification, Regression, and System Identification
Gang Liu, Junchang Wang
08 Apr 2020

NBDT: Neural-Backed Decision Trees
Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, Joseph E. Gonzalez
01 Apr 2020

Architecture Disentanglement for Deep Neural Networks
Jie Hu, Liujuan Cao, QiXiang Ye, Tong Tong, Shengchuan Zhang, Ke Li, Feiyue Huang, Rongrong Ji, Ling Shao
AAML · 30 Mar 2020

Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu
CML, ELM, XAI · 09 Mar 2020