Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

7 August 2019 · arXiv:1908.02686
Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke
Communities: AAML, FAtt

Papers citing "Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks"

25 papers
Probabilistic Stability Guarantees for Feature Attributions
Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong
18 Apr 2025

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
23 Nov 2024

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli
03 Nov 2024

Interpret the Predictions of Deep Networks via Re-Label Distillation
Yingying Hua, Shiming Ge, Daichi Zhang
Communities: FAtt
20 Sep 2024

Human-inspired Explanations for Vision Transformers and Convolutional Neural Networks
Mahadev Prasad Panda, Matteo Tiezzi, Martina Vilas, Gemma Roig, Bjoern M. Eskofier, Dario Zanca
Communities: ViT, AAML
04 Aug 2024

Feature CAM: Interpretable AI in Image Classification
Frincy Clement, Ji Yang, Irene Cheng
Communities: FAtt
08 Mar 2024

Overview of Class Activation Maps for Visualization Explainability
Anh Pham Thi Minh
Communities: HAI, FAtt
25 Sep 2023

MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
Jingwei Zhang, Farzan Farnia
Communities: UQCV
08 Jan 2023

OCTET: Object-aware Counterfactual Explanations
Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord
Communities: AAML
22 Nov 2022

NeuroMapper: In-browser Visualizer for Neural Network Training
Zhiyan Zhou, Kevin Li, Haekyu Park, Megan Dass, Austin P. Wright, Nilaksh Das, Duen Horng Chau
Communities: 3DH
22 Oct 2022

Spatial-temporal Concept based Explanation of 3D ConvNets
Yi Ji, Yu Wang, K. Mori, Jien Kato
Communities: 3DPC, FAtt
09 Jun 2022

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord
17 Nov 2021

Gradient Frequency Modulation for Visually Explaining Video Understanding Models
Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong
Communities: FAtt, AAML
01 Nov 2021

Explainable, automated urban interventions to improve pedestrian and vehicle safety
Cristina Bustos, Daniel Rhoads, Albert Solé-Ribalta, David Masip, Alexandre Arenas, Àgata Lapedriza, Javier Borge-Holthoefer
22 Oct 2021

TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
Communities: FAtt
11 Oct 2021

How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review
Florian Tambon, Gabriel Laberge, Le An, Amin Nikanjam, Paulina Stevia Nouwou Mindom, Y. Pequignot, Foutse Khomh, G. Antoniol, E. Merlo, François Laviolette
26 Jul 2021

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
Communities: FAtt
20 Apr 2021

Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian
Communities: TDI
19 Apr 2021

Visualizing Color-wise Saliency of Black-Box Image Classification Models
Yuhki Hatakeyama, Hiroki Sakuma, Yoshinori Konishi, Kohei Suenaga
Communities: FAtt
06 Oct 2020

Counterfactual Explanation Based on Gradual Construction for Deep Networks
Hong G Jung, Sin-Han Kang, Hee-Dong Kim, Dong-Ok Won, Seong-Whan Lee
Communities: OOD, FAtt
05 Aug 2020

Generative causal explanations of black-box classifiers
Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
Communities: CML
24 Jun 2020

Perturb More, Trap More: Understanding Behaviors of Graph Neural Networks
Chaojie Ji, Ruxin Wang, Hongyan Wu
21 Apr 2020

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Communities: XAI
22 Oct 2019

Interpreting Layered Neural Networks via Hierarchical Modular Representation
C. Watanabe
03 Oct 2018

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
Communities: SILM, AAML
08 Jul 2016