Visualizing Deep Neural Network Decisions: Prediction Difference Analysis

15 February 2017
L. Zintgraf
Taco S. Cohen
T. Adel
Max Welling
    FAtt
arXiv:1702.04595

Papers citing "Visualizing Deep Neural Network Decisions: Prediction Difference Analysis"

50 / 328 papers shown
Software and application patterns for explanation methods
Maximilian Alber
17
11
0
09 Apr 2019
Visualization of Convolutional Neural Networks for Monocular Depth Estimation
Junjie Hu
Yan Zhang
Takayuki Okatani
MDE
17
83
0
06 Apr 2019
Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
Christian Rupprecht
Cyril Ibrahim
C. Pal
13
32
0
02 Apr 2019
Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
Marco Ancona
Cengiz Öztireli
Markus Gross
FAtt
TDI
14
223
0
26 Mar 2019
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Wieland Brendel
Matthias Bethge
SSL
FAtt
15
557
0
20 Mar 2019
GNNExplainer: Generating Explanations for Graph Neural Networks
Rex Ying
Dylan Bourgeois
Jiaxuan You
Marinka Zitnik
J. Leskovec
LLMAG
20
1,282
0
10 Mar 2019
Aggregating explanation methods for stable and robust explainability
Laura Rieger
Lars Kai Hansen
AAML
FAtt
27
11
0
01 Mar 2019
Capacity allocation through neural network layers
Jonathan Donier
9
3
0
22 Feb 2019
Capacity allocation analysis of neural networks: A tool for principled architecture design
Jonathan Donier
14
4
0
12 Feb 2019
Learning Decision Trees Recurrently Through Communication
Stephan Alaniz
Diego Marcos
Bernt Schiele
Zeynep Akata
17
16
0
05 Feb 2019
Visual Rationalizations in Deep Reinforcement Learning for Atari Games
L. Weitkamp
Elise van der Pol
Zeynep Akata
8
27
0
01 Feb 2019
On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh
Cheng-Yu Hsieh
A. Suggala
David I. Inouye
Pradeep Ravikumar
FAtt
17
445
0
27 Jan 2019
Learning Global Pairwise Interactions with Bayesian Neural Networks
Tianyu Cui
Pekka Marttinen
Samuel Kaski
BDL
11
17
0
24 Jan 2019
SISC: End-to-end Interpretable Discovery Radiomics-Driven Lung Cancer Prediction via Stacked Interpretable Sequencing Cells
Vignesh Sankar
Devinder Kumar
David A Clausi
Graham W. Taylor
Alexander Wong
11
22
0
15 Jan 2019
Interpretable machine learning: definitions, methods, and applications
W. James Murdoch
Chandan Singh
Karl Kumbier
R. Abbasi-Asl
Bin Yu
XAI
HAI
21
1,415
0
14 Jan 2019
A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang
Daniel Kroening
Wenjie Ruan
M. Kwiatkowska
Youcheng Sun
Emese Thamo
Min Wu
Xinping Yi
AAML
8
50
0
18 Dec 2018
Efficient Interpretation of Deep Learning Models Using Graph Structure and Cooperative Game Theory: Application to ASD Biomarker Discovery
Xiaoxiao Li
Nicha Dvornek
Yuan Zhou
Juntang Zhuang
P. Ventola
James S. Duncan
132
19
0
14 Dec 2018
Interpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs
Daniel Neil
Joss Briody
A. Lacoste
Aaron Sim
Páidí Creed
Amir Saffari
GNN
66
35
0
01 Dec 2018
Rank Projection Trees for Multilevel Neural Network Interpretation
J. Warrell
Hussein Mohsen
M. Gerstein
FAtt
14
0
0
01 Dec 2018
A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni
Niloofar Zarei
Eric D. Ragan
23
102
0
28 Nov 2018
A Visual Interaction Framework for Dimensionality Reduction Based Data Exploration
M. Cavallo
Çağatay Demiralp
6
55
0
28 Nov 2018
Deformable ConvNets v2: More Deformable, Better Results
Xizhou Zhu
Han Hu
Stephen Lin
Jifeng Dai
ObjD
22
1,980
0
27 Nov 2018
Data Augmentation using Random Image Cropping and Patching for Deep CNNs
Ryo Takahashi
Takashi Matsubara
K. Uehara
9
326
0
22 Nov 2018
How You See Me
Rohit Gandikota
Deepak Mishra
OOD
14
0
0
20 Nov 2018
Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions
Denis A. Gudovskiy
Alec Hodgkinson
Takuya Yamaguchi
Yasunori Ishii
Sotaro Tsukizawa
FAtt
14
13
0
19 Nov 2018
An Overview of Computational Approaches for Interpretation Analysis
Philipp Blandfort
Jörn Hees
D. Patton
19
2
0
09 Nov 2018
Explaining Deep Learning Models - A Bayesian Non-parametric Approach
Wenbo Guo
Sui Huang
Yunzhe Tao
Xinyu Xing
Lin Lin
BDL
11
47
0
07 Nov 2018
What evidence does deep learning model use to classify Skin Lesions?
Xiaoxiao Li
Junyan Wu
Eric Z. Chen
Hongda Jiang
11
9
0
02 Nov 2018
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo
Justin Gilmer
Ian Goodfellow
Been Kim
FAtt
AAML
11
128
0
08 Oct 2018
Sanity Checks for Saliency Maps
Julius Adebayo
Justin Gilmer
M. Muelly
Ian Goodfellow
Moritz Hardt
Been Kim
FAtt
AAML
XAI
12
1,926
0
08 Oct 2018
Diagnosing Convolutional Neural Networks using their Spectral Response
V. Stamatescu
Mark D. McDonnell
14
3
0
08 Oct 2018
A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees
Klaus Broelemann
Gjergji Kasneci
14
20
0
25 Sep 2018
Ensemble learning with 3D convolutional neural networks for connectome-based prediction
Meenakshi Khosla
K. Jamison
Amy Kuceyeski
M. Sabuncu
3DV
6
88
0
11 Sep 2018
Brain Biomarker Interpretation in ASD Using Deep Learning and fMRI
Xiaoxiao Li
Nicha Dvornek
Juntang Zhuang
P. Ventola
James S. Duncan
26
70
0
23 Aug 2018
Techniques for Interpretable Machine Learning
Mengnan Du
Ninghao Liu
Xia Hu
FaML
22
1,071
0
31 Jul 2018
Regional Multi-scale Approach for Visually Pleasing Explanations of Deep Neural Networks
Dasom Seo
Kanghan Oh
Il-Seok Oh
FAtt
17
23
0
31 Jul 2018
Computationally Efficient Measures of Internal Neuron Importance
Avanti Shrikumar
Jocelin Su
A. Kundaje
FAtt
11
29
0
26 Jul 2018
Grounding Visual Explanations
Lisa Anne Hendricks
Ronghang Hu
Trevor Darrell
Zeynep Akata
FAtt
6
225
0
25 Jul 2018
Explaining Image Classifiers by Counterfactual Generation
C. Chang
Elliot Creager
Anna Goldenberg
D. Duvenaud
VLM
11
264
0
20 Jul 2018
Women also Snowboard: Overcoming Bias in Captioning Models (Extended Abstract)
Lisa Anne Hendricks
Kaylee Burns
Kate Saenko
Trevor Darrell
Anna Rohrbach
14
477
0
02 Jul 2018
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker
D. Erhan
Pieter-Jan Kindermans
Been Kim
FAtt
UQCV
16
670
0
28 Jun 2018
Quantum-chemical insights from interpretable atomistic neural networks
Kristof T. Schütt
M. Gastegger
A. Tkatchenko
K. Müller
AI4CE
15
31
0
27 Jun 2018
Hierarchical interpretations for neural network predictions
Chandan Singh
W. James Murdoch
Bin Yu
15
145
0
14 Jun 2018
RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records
Bum Chul Kwon
Min-Je Choi
J. Kim
E. Choi
Young Bin Kim
Soonwook Kwon
Jimeng Sun
Jaegul Choo
25
251
0
28 May 2018
Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models
Jacob R. Kauffmann
K. Müller
G. Montavon
DRL
28
96
0
16 May 2018
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Hendrik Strobelt
Sebastian Gehrmann
M. Behrisch
Adam Perer
Hanspeter Pfister
Alexander M. Rush
VLM
HAI
23
239
0
25 Apr 2018
Opening the black box of neural nets: case studies in stop/top discrimination
Thomas Roxlo
M. Reece
8
22
0
24 Apr 2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras
Marcel van Gerven
W. Haselager
XAI
14
217
0
20 Mar 2018
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
Dong Huk Park
Lisa Anne Hendricks
Zeynep Akata
Anna Rohrbach
Bernt Schiele
Trevor Darrell
Marcus Rohrbach
35
418
0
15 Feb 2018
TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Shoaib Ahmed Siddiqui
Dominique Mercier
Mohsin Munir
Andreas Dengel
Sheraz Ahmed
FAtt
AI4TS
24
82
0
08 Feb 2018