Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks

1 April 2019 · arXiv:1904.00605 · FAtt
Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee

Papers citing "Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks"

18 / 18 papers shown

PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models
Seongun Kim, Sol A Kim, Geonhyeong Kim, Enver Menadjiev, Chanwoo Lee, Seongwook Chung, Nari Kim, Jaesik Choi
15 May 2025

Probing Network Decisions: Capturing Uncertainties and Unveiling Vulnerabilities Without Label Information
Youngju Joung, Sehyun Lee, Jaesik Choi
12 Mar 2025 · AAML

Multiple Different Black Box Explanations for Image Classifiers
Hana Chockler, D. A. Kelly, Daniel Kroening
25 Sep 2023 · FAtt

It Ain't That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models
Xingcheng Xu, Zihao Pan, Haipeng Zhang, Yanqing Yang
16 Aug 2023 · LRM

Interpretable Diabetic Retinopathy Diagnosis based on Biomarker Activation Map
P. Zang, T. Hormel, Jie Wang, Yukun Guo, Steven T. Bailey, C. Flaxel, David Huang, T. Hwang, Yali Jia
13 Dec 2022 · MedIm

Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation
Juanwu Lu, Wei Zhan, Masayoshi Tomizuka, Yeping Hu
06 Aug 2022

Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods
Ricards Marcinkevics, Ece Ozkan, Julia E. Vogt
26 Jul 2022

Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer, Idan Schwartz, Lior Wolf
02 Jun 2022 · ViT

From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology
Maximilian Springenberg, A. Frommholz, M. Wenzel, Eva Weicken, Jackie Ma, Nils Strodthoff
11 Apr 2022 · MedIm

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing
Joonhyung Park, J. Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
16 Dec 2021

Focus! Rating XAI Methods and Finding Biases
Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, Victor Gimenez-Abalos
28 Sep 2021

Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
Hila Chefer, Shir Gur, Lior Wolf
29 Mar 2021 · ViT

Explainability Guided Multi-Site COVID-19 CT Classification
Ameen Ali, Tal Shaharabany, Lior Wolf
25 Mar 2021

Explanations for Occluded Images
Hana Chockler, Daniel Kroening, Youcheng Sun
05 Mar 2021

Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
17 Dec 2020

Counterfactual Explanation Based on Gradual Construction for Deep Networks
Hong G Jung, Sin-Han Kang, Hee-Dong Kim, Dong-Ok Won, Seong-Whan Lee
05 Aug 2020 · OOD, FAtt

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
20 Dec 2019 · BDL, FAtt, XAI

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
24 Jun 2017 · FaML