ResearchTrend.AI

Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers

4 April 2016
Alexander Binder
G. Montavon
Sebastian Lapuschkin
K. Müller
Wojciech Samek
    FAtt

Papers citing "Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers"

34 / 84 papers shown
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Yikang Shen
Yi Ding
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM
ViT
39
153
0
23 Jun 2021
FairCanary: Rapid Continuous Explainable Fairness
Avijit Ghosh
Aalok Shanbhag
Christo Wilson
11
20
0
13 Jun 2021
Causal Abstractions of Neural Networks
Atticus Geiger
Hanson Lu
Thomas Icard
Christopher Potts
NAI
CML
17
222
0
06 Jun 2021
Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh
Sebastian Müller
Matthias Jakobs
Vanessa Toborek
Hanxiao Tan
Raphael Fischer
Pascal Welke
Sebastian Houben
Laura von Rueden
XAI
22
28
0
21 May 2021
Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke
Megha Khosla
Mandeep Rathee
Avishek Anand
FAtt
23
38
0
18 May 2021
Bias, Fairness, and Accountability with AI and ML Algorithms
Neng-Zhi Zhou
Zach Zhang
V. Nair
Harsh Singhal
Jie Chen
Agus Sudjianto
FaML
21
9
0
13 May 2021
Knowledge Neurons in Pretrained Transformers
Damai Dai
Li Dong
Y. Hao
Zhifang Sui
Baobao Chang
Furu Wei
KELM
MU
28
418
0
18 Apr 2021
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
Thomas Rojat
Raphael Puget
David Filliat
Javier Del Ser
R. Gelin
Natalia Díaz Rodríguez
XAI
AI4TS
44
128
0
02 Apr 2021
Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang
Matt Fredrikson
Anupam Datta
OOD
FAtt
35
25
0
20 Mar 2021
Transformer Interpretability Beyond Attention Visualization
Hila Chefer
Shir Gur
Lior Wolf
45
644
0
17 Dec 2020
Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
Jiaxuan Wang
Jenna Wiens
Scott M. Lundberg
FAtt
25
88
0
27 Oct 2020
Learning Propagation Rules for Attribution Map Generation
Yiding Yang
Jiayan Qiu
Xiuming Zhang
Dacheng Tao
Xinchao Wang
FAtt
38
17
0
14 Oct 2020
SHAP values for Explaining CNN-based Text Classification Models
Wei Zhao
Tarun Joshi
V. Nair
Agus Sudjianto
FAtt
28
36
0
26 Aug 2020
Sequential Explanations with Mental Model-Based Policies
A. Yeung
Shalmali Joshi
Joseph Jay Williams
Frank Rudzicz
FAtt
LRM
31
15
0
17 Jul 2020
The Penalty Imposed by Ablated Data Augmentation
Frederick Liu
A. Najmi
Mukund Sundararajan
31
6
0
08 Jun 2020
Attribution in Scale and Space
Shawn Xu
Subhashini Venugopalan
Mukund Sundararajan
FAtt
BDL
14
71
0
03 Apr 2020
Self-Supervised Discovering of Interpretable Features for Reinforcement Learning
Wenjie Shi
Gao Huang
Shiji Song
Zhuoyuan Wang
Tingyu Lin
Cheng Wu
SSL
28
18
0
16 Mar 2020
Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani
James Zou
FAtt
TDI
25
108
0
23 Feb 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek
Pascal Sturmfels
Su-In Lee
FAtt
30
143
0
10 Feb 2020
Q-value Path Decomposition for Deep Multiagent Reinforcement Learning
Yaodong Yang
Jianye Hao
Guangyong Chen
Hongyao Tang
Yingfeng Chen
Yujing Hu
Changjie Fan
Zhongyu Wei
23
52
0
10 Feb 2020
Feature relevance quantification in explainable AI: A causal problem
Dominik Janzing
Lenon Minorics
Patrick Blobaum
FAtt
CML
15
278
0
29 Oct 2019
Interpreting Undesirable Pixels for Image Classification on Black-Box Models
Sin-Han Kang
Hong G Jung
Seong-Whan Lee
FAtt
19
3
0
27 Sep 2019
Improving performance of deep learning models with axiomatic attribution priors and expected gradients
G. Erion
Joseph D. Janizek
Pascal Sturmfels
Scott M. Lundberg
Su-In Lee
OOD
BDL
FAtt
21
80
0
25 Jun 2019
Software and application patterns for explanation methods
Maximilian Alber
38
11
0
09 Apr 2019
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin
S. Wäldchen
Alexander Binder
G. Montavon
Wojciech Samek
K. Müller
17
996
0
26 Feb 2019
DeepPINK: reproducible feature selection in deep neural networks
Yang Young Lu
Yingying Fan
Jinchi Lv
William Stafford Noble
FAtt
27
124
0
04 Sep 2018
Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions
Milo Honegger
19
35
0
15 Aug 2018
A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values
Mukund Sundararajan
Ankur Taly
FAtt
19
21
0
11 Jun 2018
How Important Is a Neuron?
Kedar Dhamdhere
Mukund Sundararajan
Qiqi Yan
FAtt
GNN
22
128
0
30 May 2018
Did the Model Understand the Question?
Pramod Kaushik Mudrakarta
Ankur Taly
Mukund Sundararajan
Kedar Dhamdhere
ELM
OOD
FAtt
27
196
0
14 May 2018
Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon
Wojciech Samek
K. Müller
FaML
234
2,238
0
24 Jun 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD
FAtt
42
5,865
0
04 Mar 2017
Understanding intermediate layers using linear classifier probes
Guillaume Alain
Yoshua Bengio
FAtt
53
897
0
05 Oct 2016
Identifying individual facial expressions by deconstructing a neural network
F. Arbabzadah
G. Montavon
K. Müller
Wojciech Samek
CVBM
FAtt
30
31
0
23 Jun 2016