ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers

4 April 2016
Alexander Binder
G. Montavon
Sebastian Lapuschkin
K. Müller
Wojciech Samek
    FAtt

Papers citing "Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers"

50 / 83 papers shown
ABE: A Unified Framework for Robust and Faithful Attribution-Based Explainability
Zhiyu Zhu
Jiayu Zhang
Zhibo Jin
Fang Chen
Jianlong Zhou
FAtt
24
0
0
03 May 2025
Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou
Tammy Riklin-Raviv
67
0
0
27 Feb 2025
Extending Information Bottleneck Attribution to Video Sequences
Veronika Solopova
Lucas Schmidt
Dorothea Kolossa
47
0
0
28 Jan 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein
Carsten T. Lüth
U. Schlegel
Till J. Bungert
Mennatallah El-Assady
Paul F. Jäger
XAI
ELM
42
3
0
03 Jan 2025
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability
Wen-Dong Jiang
Chih-Yung Chang
Show-Jane Yen
Diptendu Sinha Roy
FAtt
HAI
80
1
0
02 Dec 2024
Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems
Yuhang Lu
Zewei Xu
Touradj Ebrahimi
CVBM
FAtt
XAI
52
3
0
08 Jul 2024
Interpreting the Second-Order Effects of Neurons in CLIP
Yossi Gandelsman
Alexei A. Efros
Jacob Steinhardt
MILM
62
16
0
06 Jun 2024
Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
Paulo Yanez Sarmiento
Simon Witzke
Nadja Klein
Bernhard Y. Renard
FAtt
AAML
40
0
0
22 Apr 2024
Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights
Moein Heidari
Reza Azad
Sina Ghorbani Kolahi
René Arimond
Leon Niggemeier
...
Afshin Bozorgpour
Ehsan Khodapanah Aghdam
A. Kazerouni
I. Hacihaliloglu
Dorit Merhof
51
7
0
28 Mar 2024
Gradient based Feature Attribution in Explainable AI: A Technical Review
Yongjie Wang
Tong Zhang
Xu Guo
Zhiqi Shen
XAI
29
19
0
15 Mar 2024
B-Cos Aligned Transformers Learn Human-Interpretable Features
Manuel Tran
Amal Lahiani
Yashin Dicente Cid
Melanie Boxberg
Peter Lienemann
C. Matek
S. J. Wagner
Fabian J. Theis
Eldad Klaiman
Tingying Peng
MedIm
ViT
21
2
0
16 Jan 2024
HCDIR: End-to-end Hate Context Detection, and Intensity Reduction model for online comments
Neeraj Kumar Singh
Koyel Ghosh
Joy Mahapatra
Utpal Garain
Apurbalal Senapati
22
0
0
20 Dec 2023
Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu
Yixin Liu
Ninghao Liu
Mengdi Huai
Lichao Sun
Di Wang
41
5
0
29 Nov 2023
Auxiliary Losses for Learning Generalizable Concept-based Models
Ivaxi Sheth
Samira Ebrahimi Kahou
32
25
0
18 Nov 2023
Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification
Md. Shajalal
Sebastian Denef
Md. Rezaul Karim
Alexander Boden
Gunnar Stevens
XAI
24
5
0
31 Oct 2023
Poisoning Network Flow Classifiers
Giorgio Severi
Simona Boboila
Alina Oprea
J. Holodnak
K. Kratkiewicz
J. Matterer
AAML
38
4
0
02 Jun 2023
VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
A. Nalmpantis
Apostolos Panagiotopoulos
John Gkountouras
Konstantinos Papakostas
Wilker Aziz
15
4
0
13 Apr 2023
Explanation of Face Recognition via Saliency Maps
Yuhang Lu
Touradj Ebrahimi
XAI
CVBM
13
3
0
12 Apr 2023
Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations
Zhenxiao Cheng
Jie Zhou
Wen Wu
Qin Chen
Liang He
32
3
0
21 Feb 2023
SpecXAI -- Spectral interpretability of Deep Learning Models
Stefan Druc
Peter Wooldridge
A. Krishnamurthy
S. Sarkar
Aditya Balu
25
0
0
20 Feb 2023
PAMI: partition input and aggregate outputs for model interpretation
Wei Shi
Wentao Zhang
Weishi Zheng
Ruixuan Wang
FAtt
26
3
0
07 Feb 2023
TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification
Lingfeng Shen
Ze Zhang
Haiyun Jiang
Ying-Cong Chen
AAML
45
5
0
03 Feb 2023
Understanding and Detecting Hallucinations in Neural Machine Translation via Model Introspection
Weijia Xu
Sweta Agrawal
Eleftheria Briakou
Marianna J. Martindale
Marine Carpuat
HILM
27
47
0
18 Jan 2023
Advances in Medical Image Analysis with Vision Transformers: A Comprehensive Review
Reza Azad
A. Kazerouni
Moein Heidari
Ehsan Khodapanah Aghdam
Amir Molaei
Yiwei Jia
Abin Jose
Rijo Roy
Dorit Merhof
MedIm
ViT
41
162
0
09 Jan 2023
MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks
Letitia Parcalabescu
Anette Frank
37
22
0
15 Dec 2022
Explainable Artificial Intelligence for Improved Modeling of Processes
Riza Velioglu
Jan Philip Göpfert
André Artelt
Barbara Hammer
AI4TS
22
4
0
01 Dec 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
32
4
0
09 Nov 2022
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Arushi Gupta
Nikunj Saunshi
Dingli Yu
Kaifeng Lyu
Sanjeev Arora
AAML
FAtt
XAI
31
5
0
05 Nov 2022
AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning
Tao Yang
Jinghao Deng
Xiaojun Quan
Qifan Wang
Shaoliang Nie
32
3
0
12 Oct 2022
Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu
Karel D'Oosterlinck
Atticus Geiger
Amir Zur
Christopher Potts
MILM
83
35
0
28 Sep 2022
Explanation Method for Anomaly Detection on Mixed Numerical and Categorical Spaces
Iñigo López-Riobóo Botana
Carlos Eiras-Franco
Julio César Hernández Castro
Amparo Alonso-Betanzos
27
0
0
09 Sep 2022
A general-purpose method for applying Explainable AI for Anomaly Detection
John Sipple
Abdou Youssef
27
15
0
23 Jul 2022
Creating an Explainable Intrusion Detection System Using Self Organizing Maps
Jesse Ables
Thomas Kirby
William Anderson
Sudip Mittal
Shahram Rahimi
I. Banicescu
Maria Seale
35
13
0
15 Jul 2022
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane
Jesse Ables
William Anderson
Sudip Mittal
Shahram Rahimi
I. Banicescu
Maria Seale
AAML
56
71
0
13 Jul 2022
Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé
Alicia Curth
Ioana Bica
M. Schaar
CML
24
16
0
16 Jun 2022
Towards better Interpretable and Generalizable AD detection using Collective Artificial Intelligence
H. Nguyen
Michael Clement
Boris Mansencal
Pierrick Coupé
MedIm
39
6
0
07 Jun 2022
Lack of Fluency is Hurting Your Translation Model
J. Yoo
Jaewoo Kang
23
0
0
24 May 2022
A graph-transformer for whole slide image classification
Yi Zheng
R. Gindra
Emily J. Green
E. Burks
Margrit Betke
J. Beane
V. Kolachalama
MedIm
50
123
0
19 May 2022
ViTOL: Vision Transformer for Weakly Supervised Object Localization
Saurav Gupta
Sourav Lakhotia
Abhay Rawat
Rahul Tallamraju
WSOL
36
21
0
14 Apr 2022
No Token Left Behind: Explainability-Aided Image Classification and Generation
Roni Paiss
Hila Chefer
Lior Wolf
VLM
34
29
0
11 Apr 2022
Interpretation of Black Box NLP Models: A Survey
Shivani Choudhary
N. Chatterjee
S. K. Saha
FAtt
34
10
0
31 Mar 2022
A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions
Daniel Lundstrom
Tianjian Huang
Meisam Razaviyayn
FAtt
24
64
0
24 Feb 2022
XAI in the context of Predictive Process Monitoring: Too much to Reveal
Ghada Elkhawaga
Mervat Abuelkheir
M. Reichert
20
1
0
16 Feb 2022
Interpretable Low-Resource Legal Decision Making
R. Bhambhoria
Hui Liu
Samuel Dahan
Xiao-Dan Zhu
ELM
AILaw
32
9
0
01 Jan 2022
Temporal-Spatial Causal Interpretations for Vision-Based Reinforcement Learning
Wenjie Shi
Gao Huang
Shiji Song
Cheng Wu
34
9
0
06 Dec 2021
Inducing Causal Structure for Interpretable Neural Networks
Atticus Geiger
Zhengxuan Wu
Hanson Lu
J. Rozner
Elisa Kreiss
Thomas Icard
Noah D. Goodman
Christopher Potts
CML
OOD
35
71
0
01 Dec 2021
Improving Scheduled Sampling with Elastic Weight Consolidation for Neural Machine Translation
Michalis Korakakis
Andreas Vlachos
CLL
31
2
0
13 Sep 2021
Discretized Integrated Gradients for Explaining Language Models
Soumya Sanyal
Xiang Ren
FAtt
17
53
0
31 Aug 2021
Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang
Wanying Tao
Jianfei Xing
Wei Shi
Ruixuan Wang
Weishi Zheng
FAtt
42
3
0
25 Aug 2021
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Yikang Shen
Yi Ding
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM
ViT
39
153
0
23 Jun 2021