Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Nature Communications (Nat Commun), 2019 · 26 February 2019
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller

Papers citing "Unmasking Clever Hans Predictors and Assessing What Machines Really Learn"

Showing 50 of 382 citing papers.

Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification
Florentin Flambeau Jiechieu Kameni, Norbert Tsopzé · XAI, FAtt, MedIm · 08 Oct 2020

Geometric Disentanglement by Random Convex Polytopes
M. Joswig, M. Kaluba, Lukas Ruff · 29 Sep 2020

Measure Utility, Gain Trust: Practical Advice for XAI Researcher
B. Pierson, M. Glenski, William I. N. Sealy, Dustin L. Arendt · 27 Sep 2020

A Unifying Review of Deep and Shallow Anomaly Detection · Proceedings of the IEEE (Proc. IEEE), 2020
Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, G. Montavon, Wojciech Samek, Matthias Kirchler, Thomas G. Dietterich, Klaus-Robert Müller · UQCV · 24 Sep 2020

Survey of explainable machine learning with visual and granular methods beyond quasi-explanations · Studies in Computational Intelligence (SCI), 2020
Boris Kovalerchuk, M. Ahmad · 21 Sep 2020

Contextual Semantic Interpretability · Asian Conference on Computer Vision (ACCV), 2020
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia · SSL · 18 Sep 2020

Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network
Erico Tjoa, Cuntai Guan · FAtt · 05 Sep 2020

A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning · Neural Networks (NN), 2020
Martin Mundt, Yongjun Hong, Iuliia Pliushch, Visvanathan Ramesh · CLL · 03 Sep 2020

Langevin Cooling for Domain Translation
Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima · 31 Aug 2020

Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström · 14 Aug 2020

Trustworthy AI Inference Systems: An Industry Research View
Rosario Cammarota, M. Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, ..., Aydin Aysu, Fateme S. Hosseini, Chengmo Yang, Eric Wallace, Pam Norton · 10 Aug 2020

Revisiting the Modifiable Areal Unit Problem in Deep Traffic Prediction with Visual Analytics · IEEE Transactions on Visualization and Computer Graphics (TVCG), 2020
Wei Zeng, Chengqiao Lin, Juncong Lin, Jincheng Jiang, Jiazhi Xia, Cagatay Turkay, Wei Chen · 30 Jul 2020

Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation
Lin Geng Foo, Rui En Ho, Jiamei Sun, Alexander Binder · 21 Jul 2020

Fairwashing Explanations with Off-Manifold Detergent
Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, K. Müller, Pan Kessel · FAtt, FaML · 20 Jul 2020

Explanation-Guided Training for Cross-Domain Few-Shot Classification · International Conference on Pattern Recognition (ICPR), 2020
Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder · 17 Jul 2020

Explainable Deep Learning for Uncovering Actionable Scientific Insights for Materials Discovery and Design
Shusen Liu, B. Kailkhura, Jize Zhang, A. Hiszpanski, Emily Robertson, Donald Loveland, T. Y. Han · 16 Jul 2020

Explainable Deep One-Class Classification
Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Matthias Kirchler, Klaus-Robert Müller · 03 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider · 01 Jul 2020

Actionable Attribution Maps for Scientific Machine Learning
Shusen Liu, B. Kailkhura, Jize Zhang, A. Hiszpanski, Emily Robertson, Donald Loveland, T. Y. Han · 30 Jun 2020

The Clever Hans Effect in Anomaly Detection
Jacob R. Kauffmann, Lukas Ruff, G. Montavon, Klaus-Robert Müller · AAML · 18 Jun 2020

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Matthias Kirchler · UQCV, FAtt · 16 Jun 2020

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad · XAI · 16 Jun 2020

Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon · 05 Jun 2020

Black-box Explanation of Object Detectors via Saliency Maps
Vitali Petsiuk, R. Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko · FAtt · 05 Jun 2020

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo · XAI · 29 May 2020

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir · FAtt · 18 May 2020

InteractionNet: Modeling and Explaining of Noncovalent Protein-Ligand Interactions with Noncovalent Graph Neural Network and Layer-Wise Relevance Propagation
Hyeoncheol Cho, E. Lee, I. Choi · GNN, FAtt · 12 May 2020

Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications · Bulletin of the American Meteorological Society (BAMS), 2020
I. Ebert‐Uphoff, Kyle Hilburn · 06 May 2020

Towards explainable classifiers using the counterfactual approach -- global explanations for discovering bias in data
Agnieszka Mikołajczyk, M. Grochowski, Arkadiusz Kwasigroch · FAtt, CML · 05 May 2020

A neural network walks into a lab: towards using deep nets as models for human behavior
Wei-Ying Ma, B. Peters · HAI, AI4CE · 02 May 2020

Attribution Analysis of Grammatical Dependencies in LSTMs
Sophie Hao · 30 Apr 2020

Development and Interpretation of a Neural Network-Based Synthetic Radar Reflectivity Estimator Using GOES-R Satellite Observations · Journal of Applied Meteorology and Climatology (JAMC), 2020
Kyle Hilburn, I. Ebert‐Uphoff, S. Miller · 16 Apr 2020

Shortcut Learning in Deep Neural Networks · Nature Machine Intelligence (NMI), 2020
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann · 16 Apr 2020

Explainable Image Classification with Evidence Counterfactual
T. Vermeire, David Martens · FAtt · 16 Apr 2020

Overinterpretation reveals image classification model pathologies · Neural Information Processing Systems (NeurIPS), 2020
Brandon Carter, Siddhartha Jain, Jonas W. Mueller, David K Gifford · FAtt · 19 Mar 2020

Vulnerabilities of Connectionist AI Applications: Evaluation and Defence · Frontiers in Big Data (Front. Big Data), 2020
Christian Berghoff, Matthias Neu, Arndt von Twickel · AAML · 18 Mar 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · XAI · 17 Mar 2020

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI · Information Fusion (Inf. Fusion), 2020
L. Arras, Ahmed Osman, Wojciech Samek · XAI, AAML · 16 Mar 2020

Building and Interpreting Deep Similarity Models · IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Oliver Eberle, Jochen Büttner, Florian Kräutli, K. Müller, Matteo Valleriani, G. Montavon · 11 Mar 2020

Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology · Machine Learning and Knowledge Extraction (MLKE), 2020
Stefan Studer, T. Bui, C. Drescher, A. Hanuschkin, Ludwig Winkler, S. Peters, Klaus-Robert Müller · 11 Mar 2020

The Pragmatic Turn in Explainable Artificial Intelligence (XAI) · Minds and Machines (MM), 2019
Andrés Páez · 22 Feb 2020

Forecasting Industrial Aging Processes with Machine Learning Methods · Computers and Chemical Engineering (CCE), 2020
Mihail Bogojeski, Simeon Sauer, F. Horn, K. Müller · AI4CE · 05 Feb 2020

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study · International Conference on Intelligent User Interfaces (IUI), 2020
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze · AAML, FAtt, XAI · 03 Feb 2020

Big-Data Science in Porous Materials: Materials Genomics and Machine Learning · Chemical Reviews (Chem. Rev.), 2020
Kevin Maik Jablonka, D. Ongari, S. M. Moosavi, B. Smit · AI4CE · 18 Jan 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations · Nature Machine Intelligence (NMI), 2020
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models · Information Fusion (Inf. Fusion), 2020
Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder · FAtt · 04 Jan 2020

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin · AAML · 22 Dec 2019

Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification · International Conference on Learning Representations (ICLR), 2019
Bennet Breier, A. Onken · 20 Dec 2019

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning · Pattern Recognition (Pattern Recognit.), 2019
Seul-Ki Yeom, P. Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, K. Müller, Wojciech Samek · CVBM · 18 Dec 2019

Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation
Tianhong Dai, Kai Arulkumaran, Tamara Gerbert, Samyakh Tukra, Feryal M. P. Behbahani, Anil Anthony Bharath · 18 Dec 2019