Opti-CAM: Optimizing saliency maps for interpretability
arXiv:2301.07002 · 17 January 2023
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
Papers citing "Opti-CAM: Optimizing saliency maps for interpretability" (19 / 19 papers shown)
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas · 12 May 2025

Generating visual explanations from deep networks using implicit neural representations
Michal Byra, Henrik Skibbe · GAN, FAtt · 20 Jan 2025

CNN Explainability with Multivector Tucker Saliency Maps for Self-Supervised Models
Aymene Mohammed Bouayed, Samuel Deslauriers-Gauthier, Adrian Iaccovelli, D. Naccache · 30 Oct 2024

The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach
David Bertoin, Eduardo Hugo Sanchez, Mehdi Zouitine, Emmanuel Rachelson · 25 Sep 2024

PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning
Bowen Tian, Songning Lai, Lujundong Li, Zhihao Shuai, Runwei Guan, Tian Wu, Yutao Yue · VLM · 05 Sep 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan · 27 May 2024

A Learning Paradigm for Interpretable Gradients
Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache · FAtt · 23 Apr 2024

CA-Stream: Attention-based pooling for interpretable image recognition
Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis · 23 Apr 2024

CAM-Based Methods Can See through Walls
Magamed Taimeskhanov, R. Sicre, Damien Garreau · 02 Apr 2024

On the stability, correctness and plausibility of visual explanation methods based on feature importance
Romain Xu-Darme, Jenny Benois-Pineau, R. Giot, Georges Quénot, Zakaria Chihani, M. Rousset, Alexey Zhukov · XAI, FAtt · 25 Oct 2023

Explanation-based Training with Differentiable Insertion/Deletion Metric-aware Regularizers
Yuya Yoshikawa, Tomoharu Iwata · 19 Oct 2023

SPADE: Sparsity-Guided Debugging for Deep Neural Networks
Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh · 06 Oct 2023

UPDExplainer: an Interpretable Transformer-based Framework for Urban Physical Disorder Detection Using Street View Imagery
Chuanbo Hu, Shan Jia, Fan Zhang, C. Xiao, Mindi Ruan, Jacob Thrasher, Xin Li · 04 May 2023

Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez, Thomas Fréour, Harold Mouchère · XAI, FAtt · 31 Jan 2022

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara · FAtt · 20 Apr 2021

Benchmarking and Survey of Explanation Methods for Black Box Models
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · XAI · 25 Feb 2021

Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability
Jason Phang, Jungkyu Park, Krzysztof J. Geras · FAtt, AAML · 19 Oct 2020

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller · FaML · 24 Jun 2017

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei · VLM, ObjD · 01 Sep 2014