ResearchTrend.AI

Detecting Statistical Interactions from Neural Network Weights
arXiv:1705.04977 (14 May 2017)
Michael Tsang, Dehua Cheng, Yan Liu

Papers citing "Detecting Statistical Interactions from Neural Network Weights"

50 / 53 papers shown
Interpretable Retinal Disease Prediction Using Biology-Informed Heterogeneous Graph Representations
  Laurin Lux, Alexander H. Berger, Maria Romeo Tricas, Alaa E. Fayed, Siyang Song, Linus Kreitner, Jonas Weidner, Martin J. Menten, Daniel Rueckert, Johannes C. Paetzold (23 Feb 2025)

Error-controlled non-additive interaction discovery in machine learning models
  Winston Chen, Yifan Jiang, William Stafford Noble, Yang Young Lu (17 Feb 2025)

KernelSHAP-IQ: Weighted Least-Square Optimization for Shapley Interactions
  Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer (17 May 2024)

RanPAC: Random Projections and Pre-trained Models for Continual Learning
  Mark D Mcdonnell, Dong Gong, Amin Parvaneh, Ehsan Abbasnejad, Anton Van Den Hengel (05 Jul 2023)

Improving Neural Additive Models with Bayesian Principles
  Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin (26 May 2023)

Exploring the cloud of feature interaction scores in a Rashomon set
  Sichao Li, Rong Wang, Quanling Deng, Amanda S. Barnard (17 May 2023)

Explaining black box text modules in natural language with language models
  Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin Yu, Jianfeng Gao (17 May 2023)
Explanations of Black-Box Models based on Directional Feature Interactions
  A. Masoomi, Davin Hill, Zhonghui Xu, C. Hersh, E. Silverman, P. Castaldi, Stratis Ioannidis, Jennifer Dy (16 Apr 2023)

Detection of Interacting Variables for Generalized Linear Models via Neural Networks
  Y. Havrylenko, Julia I Heger (16 Sep 2022)

Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions
  Fang Wu, Siyuan Li, Lirong Wu, Dragomir R. Radev, Stan Z. Li (15 May 2022)

Faith-Shap: The Faithful Shapley Interaction Index
  Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar (02 Mar 2022)

Toward Explainable AI for Regression Models
  S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Muller, G. Montavon (21 Dec 2021)

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
  Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits (05 Dec 2021)

Discovering and Explaining the Representation Bottleneck of DNNs
  Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang (11 Nov 2021)
Interpreting Attributions and Interactions of Adversarial Attacks
  Xin Eric Wang, Shuyu Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang (16 Aug 2021)

Interpreting and improving deep-learning models with reality checks
  Chandan Singh, Wooseok Ha, Bin Yu (16 Aug 2021)

Alzheimer's Disease Diagnosis via Deep Factorization Machine Models
  Raphael Ronge, K. Nho, Christian Wachinger, Sebastian Polsterl (12 Aug 2021)

Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
  Bas H. M. van der Velden, Hugo J. Kuijf, K. Gilhuijs, M. Viergever (22 Jul 2021)

Online Interaction Detection for Click-Through Rate Prediction
  Qiuqiang Lin, Chuanhou Gao (27 Jun 2021)

NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning
  C. Chang, R. Caruana, Anna Goldenberg (03 Jun 2021)

A Unified Game-Theoretic Interpretation of Adversarial Robustness
  Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Eric Wang, Meng Zhou, Jie Shi, Quanshi Zhang (12 Mar 2021)
Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG
  Arshdeep Sekhon, Zhe Wang, Yanjun Qi (03 Mar 2021)

MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
  Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu (12 Feb 2021)

Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
  Jiaheng Xie, Xinyu Liu (21 Dec 2020)

Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
  Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy (26 Oct 2020)

Towards Interaction Detection Using Topological Analysis on Neural Networks
  Zirui Liu, Qingquan Song, Kaixiong Zhou, Ting-Hsiang Wang, Ying Shan, Helen Zhou (25 Oct 2020)

A Unified Approach to Interpreting and Boosting Adversarial Transferability
  Xin Eric Wang, Jie Ren, Shuyu Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang (08 Oct 2020)

Interpreting and Boosting Dropout from a Game-Theoretic View
  Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang (24 Sep 2020)
Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
  Michael Tsang, Dehua Cheng, Hanpeng Liu, Xuening Feng, Eric Zhou, Yan Liu (19 Jun 2020)

How does this interaction affect me? Interpretable attribution for feature interactions
  Michael Tsang, Sirisha Rambhatla, Yan Liu (19 Jun 2020)

High Dimensional Model Explanations: an Axiomatic Approach
  Neel Patel, Martin Strobel, Yair Zick (16 Jun 2020)

Higher-Order Explanations of Graph Neural Networks via Relevant Walks
  Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Muller, G. Montavon (05 Jun 2020)

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
  Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller (17 Mar 2020)

GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
  Zebin Yang, Aijun Zhang, Agus Sudjianto (16 Mar 2020)

Building and Interpreting Deep Similarity Models
  Oliver Eberle, Jochen Büttner, Florian Kräutli, K. Müller, Matteo Valleriani, G. Montavon (11 Mar 2020)
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
  Joseph D. Janizek, Pascal Sturmfels, Su-In Lee (10 Feb 2020)

An interpretable neural network model through piecewise linear approximation
  Mengzhuo Guo, Qingpeng Zhang, Xiuwu Liao, D. Zeng (20 Jan 2020)

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models
  Benjamin J. Lengerich, S. Tan, C. Chang, Giles Hooker, R. Caruana (12 Nov 2019)

Periodic Spectral Ergodicity: A Complexity Measure for Deep Neural Networks and Neural Architecture Search
  Mehmet Süzen, J. Cerdà, C. Weber (10 Nov 2019)

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
  Patrick Schwab, W. Karlen (27 Oct 2019)

Neural Memory Plasticity for Anomaly Detection
  Tharindu Fernando, Simon Denman, David Ahmedt-Aristizabal, Sridha Sridharan, K. Laurens, Patrick J. Johnston, Clinton Fookes (12 Oct 2019)

Explainable Machine Learning for Scientific Insights and Discoveries
  R. Roscher, B. Bohn, Marco F. Duarte, Jochen Garcke (21 May 2019)
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
  Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu (18 May 2019)

Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model
  Tong Wang, Qihang Lin (10 May 2019)

Explaining a prediction in some nonlinear models
  Cosimo Izzo (21 Apr 2019)

Interpretable machine learning: definitions, methods, and applications
  W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin Yu (14 Jan 2019)

Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology
  Bastian Rieck, Matteo Togninalli, Christian Bock, Michael Moor, Max Horn, Thomas Gumbsch, Karsten Borgwardt (23 Dec 2018)

Can I trust you more? Model-Agnostic Hierarchical Explanations
  Michael Tsang, Youbang Sun, Dongxu Ren, Yan Liu (12 Dec 2018)

Hierarchical interpretations for neural network predictions
  Chandan Singh, W. James Murdoch, Bin Yu (14 Jun 2018)

Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty
  Hao Zhou, Yunyang Xiong, Vikas Singh (10 Jun 2018)