ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Analyzing Classifiers: Fisher Vectors and Deep Neural Networks

arXiv: 1512.00172
1 December 2015
Sebastian Bach, Alexander Binder, G. Montavon, K. Müller, Wojciech Samek

Papers citing "Analyzing Classifiers: Fisher Vectors and Deep Neural Networks"

40 papers shown

  1. Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps. Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski. 28 Feb 2025.
  2. Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition. Chong Wang, Yuanhong Chen, Fengbei Liu, Yuyuan Liu, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro. 30 Nov 2023.
  3. A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI). Timo Speith, Markus Langer. 26 Jul 2023.
  4. A Vulnerability of Attribution Methods Using Pre-Softmax Scores. Miguel A. Lerma, Mirtha Lucas. 06 Jul 2023. [FAtt]
  5. On The Coherence of Quantitative Evaluation of Visual Explanations. Benjamin Vandersmissen, José Oramas. 14 Feb 2023. [XAI, FAtt]
  6. DORA: Exploring Outlier Representations in Deep Neural Networks. Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Muller, Marina M.-C. Höhne. 09 Jun 2022.
  7. [Reproducibility Report] Explainable Deep One-Class Classification. João P C Bertoldo, Etienne Decencière. 06 Jun 2022.
  8. How explainable are adversarially-robust CNNs? Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen. 25 May 2022. [AAML, FAtt]
  9. Interpretability of Machine Learning Methods Applied to Neuroimaging. Elina Thibeau-Sutre, S. Collin, Ninon Burgos, O. Colliot. 14 Apr 2022.
  10. Explainable multiple abnormality classification of chest CT volumes. R. Draelos, Lawrence Carin. 24 Nov 2021. [MedIm]
  11. Explaining Bayesian Neural Networks. Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Muller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft. 23 Aug 2021. [BDL, AAML]
  12. Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin. 24 Jun 2021.
  13. What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research. Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum. 15 Feb 2021. [XAI]
  14. Towards Robust Explanations for Deep Neural Networks. Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel. 18 Dec 2020. [FAtt]
  15. TimeSHAP: Explaining Recurrent Models through Sequence Perturbations. João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro. 30 Nov 2020. [FAtt, AI4TS]
  16. How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks. Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Muller, Shinichi Nakajima, Marius Kloft. 16 Jun 2020. [UQCV, FAtt]
  17. Towards Interpretable Deep Learning Models for Knowledge Tracing. Yu Lu, De-Wu Wang, Qinggang Meng, Penghe Chen. 13 May 2020.
  18. Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. 30 Apr 2020. [AAML, XAI]
  19. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller. 17 Mar 2020. [XAI]
  20. On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples. P. Douglas, F. Farahani. 17 Feb 2020. [AAML]
  21. On the Explanation of Machine Learning Predictions in Clinical Gait Analysis. D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak. 16 Dec 2019.
  22. Towards Best Practice in Explaining Neural Network Decisions with LRP. M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin. 22 Oct 2019.
  23. Label-PEnet: Sequential Label Propagation and Enhancement Networks for Weakly Supervised Instance Segmentation. Weifeng Ge, Sheng Guo, Weilin Huang, Matthew R. Scott. 07 Oct 2019.
  24. Towards Explainable Artificial Intelligence. Wojciech Samek, K. Müller. 26 Sep 2019. [XAI]
  25. Software and application patterns for explanation methods. Maximilian Alber. 09 Apr 2019.
  26. Unmasking Clever Hans Predictors and Assessing What Machines Really Learn. Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller. 26 Feb 2019.
  27. An Overview of Computational Approaches for Interpretation Analysis. Philipp Blandfort, Jörn Hees, D. Patton. 09 Nov 2018.
  28. Explaining the Unique Nature of Individual Gait Patterns with Deep Learning. Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, K. Müller, W. Schöllhorn. 13 Aug 2018. [AI4CE]
  29. Layer-wise Relevance Propagation for Explainable Recommendations. Homanga Bharadhwaj. 17 Jul 2018. [FAtt]
  30. Towards computational fluorescence microscopy: Machine learning-based integrated prediction of morphological and molecular tumor profiles. Alexander Binder, M. Bockmayr, Miriam Hagele, S. Wienert, Daniel Heim, ..., M. Dietel, A. Hocke, C. Denkert, K. Müller, Frederick Klauschen. 28 May 2018. [AI4CE]
  31. Multi-Evidence Filtering and Fusion for Multi-Label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning. Weifeng Ge, Sibei Yang, Yizhou Yu. 26 Feb 2018.
  32. Methods for Interpreting and Understanding Deep Neural Networks. G. Montavon, Wojciech Samek, K. Müller. 24 Jun 2017. [FaML]
  33. Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation. Wojciech Samek, G. Montavon, Alexander Binder, Sebastian Lapuschkin, K. Müller. 24 Nov 2016. [FAtt, AI4CE]
  34. VisualBackProp: efficient visualization of CNNs. Mariusz Bojarski, A. Choromańska, K. Choromanski, Bernhard Firner, L. Jackel, Urs Muller, Karol Zieba. 16 Nov 2016. [FAtt]
  35. Understanding intermediate layers using linear classifier probes. Guillaume Alain, Yoshua Bengio. 05 Oct 2016. [FAtt]
  36. Optimistic and Pessimistic Neural Networks for Scene and Object Recognition. René Grzeszick, Sebastian Sudholt, G. Fink. 26 Sep 2016. [UQCV]
  37. Identifying individual facial expressions by deconstructing a neural network. F. Arbabzadah, G. Montavon, K. Müller, Wojciech Samek. 23 Jun 2016. [CVBM, FAtt]
  38. Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller, Wojciech Samek. 04 Apr 2016. [FAtt]
  39. Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. Sebastian Bach, Alexander Binder, K. Müller, Wojciech Samek. 21 Mar 2016. [FAtt]
  40. Evaluating the visualization of what a Deep Neural Network has learned. Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller. 21 Sep 2015. [XAI]