ResearchTrend.AI

Explainable Neural Networks based on Additive Index Models

5 June 2018
J. Vaughan, Agus Sudjianto, Erind Brahimi, Jie Chen, V. Nair
arXiv:1806.01933 (abs · PDF · HTML)

Papers citing "Explainable Neural Networks based on Additive Index Models"

30 / 30 papers shown
An AI Architecture with the Capability to Explain Recognition Results
Paul Whitten, Francis Wolff, Chris Papachristou
XAI · 32 · 1 · 0 · 13 Jun 2024
Does a Neural Network Really Encode Symbolic Concepts?
Mingjie Li, Quanshi Zhang
98 · 31 · 0 · 25 Feb 2023
On marginal feature attributions of tree-based models
Khashayar Filom, A. Miroshnikov, Konstandinos Kotsiopoulos, Arjun Ravi Kannan
FAtt · 65 · 3 · 0 · 16 Feb 2023
On the explainability of quantum neural networks based on variational quantum circuits
Ammar Daskin
MLT · FAtt · 74 · 2 · 0 · 12 Jan 2023
Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning
Danial Dervovic, Nicolas Marchesotti, Freddy Lecue, Daniele Magazzeni
69 · 0 · 0 · 11 Nov 2022
Deep Explainable Learning with Graph Based Data Assessing and Rule Reasoning
Yuanlong Li, Gaopan Huang, Min Zhou, Chuan Fu, Honglin Qiao, Yan He
71 · 1 · 0 · 09 Nov 2022
A Survey of Neural Trees
Haoling Li, Mingli Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng
AI4CE · 104 · 6 · 0 · 07 Sep 2022
Using Model-Based Trees with Boosting to Fit Low-Order Functional ANOVA Models
Linwei Hu, Jie Chen, V. Nair
77 · 3 · 0 · 14 Jul 2022
GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints
Patrick Zschech, Sven Weinzierl, Nico Hambauer, Sandra Zilker, Mathias Kraus
151 · 14 · 0 · 19 Apr 2022
Semantic interpretation for convolutional neural networks: What makes a cat a cat?
Haonan Xu, Yuntian Chen, Dongxiao Zhang
FAtt · 61 · 3 · 0 · 16 Apr 2022
LocalGLMnet: interpretable deep learning for tabular data
Ronald Richman, M. Wüthrich
LMTD · FAtt · 72 · 32 · 0 · 23 Jul 2021
Bias, Fairness, and Accountability with AI and ML Algorithms
Neng-Zhi Zhou, Zach Zhang, V. Nair, Harsh Singhal, Jie Chen, Agus Sudjianto
FaML · 123 · 9 · 0 · 13 May 2021
Neural Networks and Denotation
E. Allen
39 · 0 · 0 · 15 Mar 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
208 · 121 · 0 · 21 Jan 2021
Towards interpreting ML-based automated malware detection models: a survey
Yuzhou Lin, Xiaolin Chang
124 · 7 · 0 · 15 Jan 2021
A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing
Ping Lang, Xiongjun Fu, M. Martorella, Jian Dong, Rui Qin, Xianpeng Meng, M. Xie
41 · 42 · 0 · 29 Sep 2020
The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
A. Markus, J. Kors, P. Rijnbeek
91 · 471 · 0 · 31 Jul 2020
Surrogate Locally-Interpretable Models with Supervised Machine Learning Algorithms
Linwei Hu, Jie Chen, V. Nair, Agus Sudjianto
38 · 15 · 0 · 28 Jul 2020
GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
Zebin Yang, Aijun Zhang, Agus Sudjianto
FAtt · 170 · 130 · 0 · 16 Mar 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
AAML · AI4CE · 94 · 317 · 0 · 08 Jan 2020
Explainable Ordinal Factorization Model: Deciphering the Effects of Attributes by Piece-wise Linear Approximation
Mengzhuo Guo, Zhongzhi Xu, Qingpeng Zhang, Xiuwu Liao, Jiapeng Liu
28 · 0 · 0 · 14 Nov 2019
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Patrick Hall, Navdeep Gill, N. Schmidt
SILM · XAI · FaML · 77 · 29 · 0 · 08 Jun 2019
Enhancing Explainability of Neural Networks through Architecture Constraints
Zebin Yang, Aijun Zhang, Agus Sudjianto
AAML · 52 · 87 · 0 · 12 Jan 2019
Interpretable CNNs for Object Classification
Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
61 · 54 · 0 · 08 Jan 2019
Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks
Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang
MILM · 33 · 3 · 0 · 08 Jan 2019
Explanatory Graphs for CNNs
Quanshi Zhang, Xin Eric Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
FAtt · GNN · 44 · 3 · 0 · 18 Dec 2018
Explaining Neural Networks Semantically and Quantitatively
Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
FAtt · 62 · 56 · 0 · 18 Dec 2018
On the Art and Science of Machine Learning Explanations
Patrick Hall
FAtt · XAI · 92 · 30 · 0 · 05 Oct 2018
Model Interpretation: A Unified Derivative-based Framework for Nonparametric Regression and Supervised Machine Learning
Xiaoyu Liu, Jie Chen, Joel Vaughan, V. Nair, Agus Sudjianto
FAtt · 45 · 11 · 0 · 22 Aug 2018
Interpreting CNNs via Decision Trees
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu
FAtt · 104 · 323 · 0 · 01 Feb 2018