ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Top-down Neural Attention by Excitation Backprop
Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff
1 August 2016 · arXiv:1608.00507 (abs · PDF · HTML)

Papers citing "Top-down Neural Attention by Excitation Backprop"

Showing 50 of 317 citing papers.
• ViT-CX: Causal Explanation of Vision Transformers
  Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang (ViT) · 06 Nov 2022
• New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
  Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora (AAML, FAtt, XAI) · 05 Nov 2022
• BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
  Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga (FAtt) · 31 Oct 2022
• Neural Networks are Decision Trees
  Çağlar Aytekin (FAtt) · 11 Oct 2022
• Ablation Path Saliency
  Justus Sagemüller, Olivier Verdier (FAtt, AAML) · 26 Sep 2022
• Weakly Supervised Semantic Segmentation via Progressive Patch Learning
  Jinlong Li, Zequn Jie, Xu Wang, Yu Zhou, Xiaolin K. Wei, Lin Ma (VLM) · 16 Sep 2022
• GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks
  Xiaoqi Wang, Hang Shen · 15 Sep 2022
• Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers
  Kevin Miao, Akash Gokul, Raghav Singh, Suzanne Petryk, Joseph E. Gonzalez, Kurt Keutzer, Trevor Darrell, Colorado Reed (ViT, MedIm) · 07 Sep 2022
• Generating detailed saliency maps using model-agnostic methods
  Maciej Sakowicz (FAtt) · 04 Sep 2022
• TCAM: Temporal Class Activation Maps for Object Localization in Weakly-Labeled Unconstrained Videos
  Soufiane Belharbi, Ismail Ben Ayed, Luke McCaffrey, Eric Granger (WSOL) · 30 Aug 2022
• The Weighting Game: Evaluating Quality of Explainability Methods
  Lassi Raatikainen, Esa Rahtu (FAtt, XAI) · 12 Aug 2022
• Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
  Tomoki Uchiyama, Naoya Sogi, S. Iizuka, Koichiro Niinuma, Kazuhiro Fukui · 26 Jul 2022
• Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
  Yannik Mahlau, Christian Nolde (FAtt) · 04 Jul 2022
• Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations
  Ziyan Yang, Kushal Kafle, Franck Dernoncourt, Vicente Ordónez Román (VLM) · 30 Jun 2022
• What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs
  Tal Shaharabany, Yoad Tewel, Lior Wolf (ObjD) · 19 Jun 2022
• FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs
  Hui Li, Zihao Li, Rui Ma, Tieru Wu (FAtt) · 17 Jun 2022
• ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
  V. V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth C. Fong, Olga Russakovsky (FAtt) · 15 Jun 2022
• Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
  Teodor Chiaburu, F. Biessmann, Frank Haußer · 15 Jun 2022
• Large Loss Matters in Weakly Supervised Multi-Label Classification
  Youngwook Kim, Jae Myung Kim, Zeynep Akata, Jungwook Lee (NoLa) · 08 Jun 2022
• Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
  Tom Ron, M. Weiler-Sagie, Tamir Hazan (FAtt, MedIm) · 06 Jun 2022
• Deletion and Insertion Tests in Regression Models
  Naofumi Hama, Masayoshi Mase, Art B. Owen · 25 May 2022
• Learnable Visual Words for Interpretable Image Recognition
  Wenxi Xiao, Zhengming Ding, Hongfu Liu (VLM) · 22 May 2022
• Towards Better Understanding Attribution Methods
  Sukrut Rao, Moritz Bohle, Bernt Schiele (XAI) · 20 May 2022
• OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data
  David Schinagl, Georg Krispel, Horst Possegger, P. Roth, Horst Bischof (3DPC) · 13 Apr 2022
• HINT: Hierarchical Neuron Concept Explainer
  Andong Wang, Wei-Ning Lee, Xiaojuan Qi · 27 Mar 2022
• Explainability in Graph Neural Networks: An Experimental Survey
  Peibo Li, Yixing Yang, Maurice Pagnucco, Yang Song · 17 Mar 2022
• Do Explanations Explain? Model Knows Best
  Ashkan Khakzar, Pedram J. Khorsandi, Rozhin Nobahari, Nassir Navab (XAI, AAML, FAtt) · 04 Mar 2022
• ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks
  Mohammad Mahdi Dehshibi, Mona Ashtari-Majlan, Gereziher W. Adhane, David Masip (AAML, FAtt) · 02 Mar 2022
• On Guiding Visual Attention with Language Specification
  Suzanne Petryk, Lisa Dunlap, Keyan Nasseri, Joseph E. Gonzalez, Trevor Darrell, Anna Rohrbach (VLM) · 17 Feb 2022
• Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
  Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne (XAI, ELM) · 14 Feb 2022
• Multi-Modal Knowledge Graph Construction and Application: A Survey
  Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, N. Yuan · 11 Feb 2022
• Keyword localisation in untranscribed speech using visually grounded speech models
  Kayode Olaleye, Dan Oneaţă, Herman Kamper · 02 Feb 2022
• Deeply Explain CNN via Hierarchical Decomposition
  Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr (FAtt) · 23 Jan 2022
• Negative Evidence Matters in Interpretable Histology Image Classification
  Soufiane Belharbi, M. Pedersoli, Ismail Ben Ayed, Luke McCaffrey, Eric Granger · 07 Jan 2022
• Class-Incremental Continual Learning into the eXtended DER-verse
  Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, Angelo Porrello, Simone Calderara (CLL, BDL) · 03 Jan 2022
• Toward Explainable AI for Regression Models
  S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon (XAI) · 21 Dec 2021
• RELAX: Representation Learning Explainability
  Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen (FAtt) · 19 Dec 2021
• Neural Attention Models in Deep Learning: Survey and Taxonomy
  Alana de Santana Correia, Esther Colombini (MLAU) · 11 Dec 2021
• HIVE: Evaluating the Human Interpretability of Visual Explanations
  Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 06 Dec 2021
• Temporal-Spatial Causal Interpretations for Vision-Based Reinforcement Learning
  Wenjie Shi, Gao Huang, Shiji Song, Cheng Wu · 06 Dec 2021
• Reinforcement Explanation Learning
  Siddhant Agarwal, Owais Iqbal, Sree Aditya Buridi, Madda Manjusha, Abir Das (FAtt) · 26 Nov 2021
• Image-specific Convolutional Kernel Modulation for Single Image Super-resolution
  Yuanfei Huang, Jie Li, Yanting Hu, Xinbo Gao, Huan Huang (SupR) · 16 Nov 2021
• Self-Interpretable Model with Transformation Equivariant Interpretation
  Yipei Wang, Xiaoqian Wang · 09 Nov 2021
• Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
  Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre (MLAU, FAtt, AAML) · 07 Nov 2021
• Gradient Frequency Modulation for Visually Explaining Video Understanding Models
  Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong (FAtt, AAML) · 01 Nov 2021
• TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
  S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger (MedIm) · 16 Oct 2021
• TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
  Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang (FAtt) · 11 Oct 2021
• Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information
  Yang Zhang, Ashkan Khakzar, Yawei Li, Azade Farshad, Seong Tae Kim, Nassir Navab (FAtt, XAI) · 04 Oct 2021
• Consistent Explanations by Contrastive Learning
  Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash (FAtt) · 01 Oct 2021
• From Heatmaps to Structural Explanations of Image Classifiers
  Li Fuxin, Zhongang Qi, Saeed Khorram, Vivswan Shitole, Prasad Tadepalli, Minsuk Kahng, Alan Fern (XAI, FAtt) · 13 Sep 2021