ResearchTrend.AI

Visualizing Deep Neural Network Decisions: Prediction Difference Analysis
arXiv:1702.04595 · 15 February 2017
L. Zintgraf, Taco S. Cohen, T. Adel, Max Welling
FAtt

Papers citing "Visualizing Deep Neural Network Decisions: Prediction Difference Analysis"

28 of 328 papers shown
A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI · 17 · 3,900 · 0 · 06 Feb 2018

Explaining First Impressions: Modeling, Recognizing, and Explaining Apparent Personality from Videos
Hugo Jair Escalante, Heysem Kaya, A. A. Salah, Sergio Escalera, Yağmur Güçlütürk, ..., Furkan Gürpinar, Achmadnoer Sukma Wicaksana, Cynthia C. S. Liem, Marcel van Gerven, R. Lier
20 · 61 · 0 · 02 Feb 2018

Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang, Song-Chun Zhu
FaML · HAI · 17 · 809 · 0 · 02 Feb 2018

Understanding Deep Architectures by Visual Summaries
Marco Carletti, Marco Godi, Maedeh Aghaei, Francesco Giuliari, Marco Cristani
3DH · FAtt · 19 · 1 · 0 · 27 Jan 2018

Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
Fred Hohman, Minsuk Kahng, Robert S. Pienta, Duen Horng Chau
OOD · HAI · 19 · 537 · 0 · 21 Jan 2018

Evaluating neural network explanation methods using hybrid documents and morphological agreement
Nina Pörner, Benjamin Roth, Hinrich Schütze
4 · 9 · 0 · 19 Jan 2018

Efficient Image Evidence Analysis of CNN Classification Results
Keyang Zhou, Bernhard Kainz
AAML · FAtt · 14 · 4 · 0 · 05 Jan 2018

Dropout Feature Ranking for Deep Learning Models
C. Chang, Ladislav Rampášek, Anna Goldenberg
OOD · 12 · 48 · 0 · 22 Dec 2017

An Introduction to Deep Visual Explanation
H. Babiker, Randy Goebel
FAtt · AAML · 22 · 19 · 0 · 26 Nov 2017

Relating Input Concepts to Convolutional Neural Network Decisions
Ning Xie, Md Kamruzzaman Sarker, Derek Doran, Pascal Hitzler, M. Raymer
FAtt · 18 · 15 · 0 · 21 Nov 2017

Autoencoder Node Saliency: Selecting Relevant Latent Representations
Y. Fan
19 · 29 · 0 · 21 Nov 2017

Using KL-divergence to focus Deep Visual Explanation
H. Babiker, Randy Goebel
FAtt · 28 · 12 · 0 · 17 Nov 2017

Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
FAtt · 14 · 145 · 0 · 16 Nov 2017

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt · XAI · 17 · 677 · 0 · 02 Nov 2017

Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization
Jaedeok Kim, Ji-Hoon Seo
FAtt · 13 · 8 · 0 · 18 Sep 2017

Machine learning methods for histopathological image analysis
D. Komura, S. Ishikawa
11 · 692 · 0 · 04 Sep 2017

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller
XAI · VLM · 34 · 1,172 · 0 · 28 Aug 2017

CNN Fixations: An unraveling approach to visualize the discriminative image regions
Konda Reddy Mopuri, Utsav Garg, R. Venkatesh Babu
AAML · 14 · 56 · 0 · 22 Aug 2017

Self-explanatory Deep Salient Object Detection
Huaxin Xiao, Jiashi Feng, Yunchao Wei, Maojun Zhang
XAI · FAtt · 20 · 9 · 0 · 18 Aug 2017

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao
AAML · 14 · 126 · 0 · 18 Aug 2017

Modeling Latent Attention Within Neural Networks
Christopher Grimm, Dilip Arumugam, Siddharth Karamcheti, David Abel, Lawson L. S. Wong, Michael L. Littman
13 · 1 · 0 · 02 Jun 2017

Towards Interrogating Discriminative Machine Learning Models
Wenbo Guo, Kaixuan Zhang, Lin Lin, Sui Huang, Xinyu Xing
FaML · 13 · 4 · 0 · 23 May 2017

Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
XAI · FAtt · 16 · 337 · 0 · 16 May 2017

Understanding the Feedforward Artificial Neural Network Model From the Perspective of Network Flow
Dawei Dai, Weimin Tan, Hong Zhan
8 · 12 · 0 · 26 Apr 2017

Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks
Devinder Kumar, Alexander Wong, Graham W. Taylor
21 · 59 · 0 · 13 Apr 2017

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
FAtt · 10 · 3,814 · 0 · 10 Apr 2017

Improving Interpretability of Deep Neural Networks with Semantic Information
Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang
11 · 120 · 0 · 12 Mar 2017

VisualBackProp: efficient visualization of CNNs
Mariusz Bojarski, A. Choromańska, K. Choromanski, Bernhard Firner, L. Jackel, Urs Muller, Karol Zieba
FAtt · 20 · 74 · 0 · 16 Nov 2016