Towards Better Explanations of Class Activation Mapping
arXiv:2102.05228, versions v1, v2, v3 (latest)
10 February 2021
Hyungsik Jung, Youngrock Oh
Tags: FAtt
Links: arXiv (abs), PDF, HTML
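
The paper above, and most of the citing works listed below, build on Class Activation Mapping (CAM). For context, here is a minimal, hypothetical sketch of the original CAM formulation (Zhou et al., 2016), in which the classifier weights of a target class reweight the last convolutional feature maps; the backbone, layer names, and placeholder input are illustrative assumptions, not taken from this paper or from any cited work.

import torch
import torch.nn.functional as F
from torchvision import models

# Any CNN ending in global average pooling + a linear classifier works;
# resnet18 is used here purely as an illustrative stand-in.
model = models.resnet18(weights=None).eval()

features = {}

def save_features(_module, _inputs, output):
    # Cache the last convolutional feature maps, shape (1, C, H, W).
    features["conv"] = output.detach()

model.layer4.register_forward_hook(save_features)

x = torch.randn(1, 3, 224, 224)  # placeholder image tensor
with torch.no_grad():
    logits = model(x)
target = logits.argmax(dim=1).item()

# CAM: weight each channel of the last conv block by the classifier weight
# of the target class, then sum over channels.
w = model.fc.weight[target]                          # shape (C,)
cam = torch.einsum("c,chw->hw", w, features["conv"][0])
cam = F.relu(cam)                                    # keep positive evidence only
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
# `cam` is now a saliency map in [0, 1] at input resolution for the predicted class.

Many of the citing papers below (e.g. Grad-CAM-style and Shapley-based variants) replace the classifier-weight step with other channel-weighting schemes while keeping this overall structure.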

Papers citing "Towards Better Explanations of Class Activation Mapping"

34 papers shown:
• Rethinking Explainability in the Era of Multimodal AI. Chirag Agarwal (16 Jun 2025)
• Visual Explanation via Similar Feature Activation for Metric Learning. Yi Liao, Ugochukwu Ejike Akpudo, Jue Zhang, Yongsheng Gao, Jun Zhou, Wenyi Zeng, Weichuan Zhang (02 Jun 2025) [FAtt]
• CAMs as Shapley Value-based Explainers. Huaiguang Cai (09 Jan 2025) [FAtt]
• A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future. Shilin Sun, Wenbin An, Feng Tian, Fang Nan, Qidong Liu, Jing Liu, N. Shah, Ping Chen (18 Dec 2024)
• Concept Learning in the Wild: Towards Algorithmic Understanding of Neural Networks. Elad Shoham, Hadar Cohen, Khalil Wattad, Havana Rika, Dan Vilenchik (15 Dec 2024)
• Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models. Yi Liao, Yongsheng Gao, Weichuan Zhang (02 Dec 2024)
• Explaining Object Detectors via Collective Contribution of Pixels. Toshinori Yamauchi, Hiroshi Kera, K. Kawamoto (01 Dec 2024) [ObjD, FAtt]
• Leveraging CAM Algorithms for Explaining Medical Semantic Segmentation. Tillmann Rheude, Andreas Wirtz, Arjan Kuijper, Stefan Wesarg (30 Sep 2024)
• EmoCAM: Toward Understanding What Drives CNN-based Emotion Recognition. Youssef Doulfoukar, Laurent Mertens, Joost Vennekens (19 Jul 2024) [FAtt]
• Giving each task what it needs -- leveraging structured sparsity for tailored multi-task learning. Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki (05 Jun 2024) [MoE]
• Weakly-supervised Semantic Segmentation via Dual-stream Contrastive Learning of Cross-image Contextual Information. Qi Lai, Chi-Man Vong (08 May 2024)
• CAPE: CAM as a Probabilistic Ensemble for Enhanced DNN Interpretation. T. Chowdhury, Kewen Liao, Vu Minh Hieu Phan, Minh-Son To, Yutong Xie, Kevin Hung, David Ross, Anton Van Den Hengel, Johan Verjans, Zhibin Liao (03 Apr 2024)
• PaPr: Training-Free One-Step Patch Pruning with Lightweight ConvNets for Faster Inference. Tanvir Mahmud, Burhaneddin Yaman, Chun-Hao Liu, Diana Marculescu (24 Mar 2024)
• Gradient based Feature Attribution in Explainable AI: A Technical Review. Yongjie Wang, Tong Zhang, Xu Guo, Zhiqi Shen (15 Mar 2024) [XAI]
• Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance. Jung-Ho Hong, Woo-Jeoung Nam, Kyu-Sung Jeon, Seong-Whan Lee (21 Dec 2023)
• Enhancing Post-Hoc Explanation Benchmark Reliability for Image Classification. T. Gomez, Harold Mouchère (29 Nov 2023) [FAtt]
• Visual Explanations via Iterated Integrated Attributions. Oren Barkan, Yehonatan Elisha, Yuval Asher, Amit Eshel, Noam Koenigstein (28 Oct 2023) [FAtt, XAI]
• Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models. Oren Barkan, Yuval Asher, Amit Eshel, Yehonatan Elisha, Noam Koenigstein (25 Oct 2023)
• Deep Integrated Explanations. Oren Barkan, Yehonatan Elisha, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein (23 Oct 2023) [FAtt]
• Trainable Noise Model as an XAI evaluation method: application on Sobol for remote sensing image segmentation. Hossein Shreim, Abdul Karim Gizzini, A. Ghandour (03 Oct 2023)
• Overview of Class Activation Maps for Visualization Explainability. Anh Pham Thi Minh (25 Sep 2023) [HAI, FAtt]
• Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach. Guillaume Jeanneret, Loïc Simon, Frédéric Jurie (14 Sep 2023) [DiffM]
• Rethinking Class Activation Maps for Segmentation: Revealing Semantic Information in Shallow Layers by Reducing Noise. Hangcheng Dong, Yuhao Jiang, Yingyan Huang, Jing-Xiao Liao, Bingguo Liu, Dong Ye, Guodong Liu (04 Aug 2023)
• Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification. Yiwen Liao, Yongsheng Gao, Weichuan Zhang (11 Jul 2023) [FAtt]
• Multimodal Explainable Artificial Intelligence: A Comprehensive Review of Methodological Advances and Future Research Directions. N. Rodis, Christos Sardianos, Panagiotis I. Radoglou-Grammatikis, Panagiotis G. Sarigiannidis, Iraklis Varlamis, Georgios Th. Papadopoulos (09 Jun 2023)
• Decom--CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map. Yuguang Yang, Runtang Guo, Shen-Te Wu, Yimi Wang, Juan Zhang, Xuan Gong, Baochang Zhang (27 May 2023)
• Towards a Praxis for Intercultural Ethics in Explainable AI. Chinasa T. Okolo (24 Apr 2023)
• NAISR: A 3D Neural Additive Model for Interpretable Shape Representation. Yining Jiao, C. Zdanski, Julia Kimbell, Andrew Prince, Cameron P Worden, ..., Christopher Rutter, Benjamin Shields, William Dunn, Jisan Mahmud, Marc Niethammer (16 Mar 2023)
• Empowering CAM-Based Methods with Capability to Generate Fine-Grained and High-Faithfulness Explanations. Changqing Qiu, Fusheng Jin, Yining Zhang (16 Mar 2023) [FAtt]
• On Label Granularity and Object Localization. Elijah Cole, Kimberly Wilber, Grant Van Horn, Xuan S. Yang, Marco Fornoni, Pietro Perona, Serge Belongie, Andrew G. Howard, Oisin Mac Aodha (20 Jul 2022) [WSOL]
• FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs. Hui Li, Zihao Li, Rui Ma, Tieru Wu (17 Jun 2022) [FAtt]
• Saliency Cards: A Framework to Characterize and Compare Saliency Methods. Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvindmani Satyanarayan (07 Jun 2022) [FAtt, XAI]
• Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study. T. Gomez, Thomas Fréour, Harold Mouchère (13 May 2022)
• Understanding CNNs from excitations. Zijian Ying, Qianmu Li, Zhichao Lian, Jun Hou, Tong Lin, Tao Wang (02 May 2022) [AAML, FAtt]