ResearchTrend.AI

arXiv:1712.06302
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks

18 December 2017
José Oramas, Kaili Wang, Tinne Tuytelaars
Topics: XAI, FAtt

Papers citing "Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks"

13 of 13 papers shown:
  • "Explainable AI needs formal notions of explanation correctness" (22 Sep 2024). Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki. [XAI]
  • "Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector" (27 Jul 2024). Xianren Zhang, Dongwon Lee, Suhang Wang. [VLM, FAtt]
  • "FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods" (11 Aug 2023). Robin Hesse, Simone Schaub-Meyer, Stefan Roth. [AAML]
  • "Precise Benchmarking of Explainable AI Attribution Methods" (06 Aug 2023). Rafael Brandt, Daan Raatjens, G. Gaydadjiev. [XAI]
  • "Towards the Characterization of Representations Learned via Capsule-based Network Architectures" (09 May 2023). Saja AL-Tawalbeh, José Oramas.
  • "Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis" (26 Mar 2023). Haoyu He, Yuede Ji, H. H. Huang.
  • "On The Coherence of Quantitative Evaluation of Visual Explanations" (14 Feb 2023). Benjamin Vandersmissen, José Oramas. [XAI, FAtt]
  • "Explainability of deep vision-based autonomous driving systems: Review and challenges" (13 Jan 2021). Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord. [XAI]
  • "Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI" (16 Mar 2020). L. Arras, Ahmed Osman, Wojciech Samek. [XAI, AAML]
  • "Model Agnostic Contrastive Explanations for Structured Data" (31 May 2019). Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri. [FAtt]
  • "Leveraging Latent Features for Local Explanations" (29 May 2019). Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu. [FAtt]
  • "Do semantic parts emerge in Convolutional Neural Networks?" (13 Jul 2016). Abel Gonzalez-Garcia, Davide Modolo, V. Ferrari.
  • "MatConvNet - Convolutional Neural Networks for MATLAB" (15 Dec 2014). Andrea Vedaldi, Karel Lenc.