Unmasking Clever Hans Predictors and Assessing What Machines Really Learn (arXiv:1902.10178)

26 February 2019
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller

Papers citing "Unmasking Clever Hans Predictors and Assessing What Machines Really Learn"

Showing 50 of 107 citing papers:
AutoGPart: Intermediate Supervision Search for Generalizable 3D Part Segmentation
Xueyi Liu, Xiaomeng Xu, Anyi Rao, Chuang Gan, L. Yi · 13 Mar 2022 · 3DPC

Statistics and Deep Learning-based Hybrid Model for Interpretable Anomaly Detection
Thabang Mathonsi, Terence L van Zyl · 25 Feb 2022

Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement
Xiaoting Shao, Karl Stelzner, Kristian Kersting · 01 Feb 2022 · CML, DRL

Systematic biases when using deep neural networks for annotating large catalogs of astronomical images
Sanchari Dhar, L. Shamir · 10 Jan 2022 · 3DPC

Toward Explainable AI for Regression Models
S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon · 21 Dec 2021 · XAI

Evaluating saliency methods on artificial data with different background types
Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe · 09 Dec 2021 · XAI, FAtt, MedIm

Scaling Up Influence Functions
Andrea Schioppa, Polina Zablotskaia, David Vilar, Artem Sokolov · 06 Dec 2021 · TDI

Explainable multiple abnormality classification of chest CT volumes
R. Draelos, Lawrence Carin · 24 Nov 2021 · MedIm

A Practical guide on Explainable AI Techniques applied on Biomedical use case applications
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, M. Dragoni, Thomas Frossard, ..., M. Trocan, Raja Chatila, Andreas Holzinger, Artur Garcez, Natalia Díaz Rodríguez · 13 Nov 2021 · XAI

CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet-Hung Tran, Fei Tang, ..., David P. Kreil, Michael K Kopp, G. Klambauer, Angela Bitto-Nemling, Sepp Hochreiter · 21 Oct 2021 · VLM, CLIP

Machine learning methods for prediction of cancer driver genes: a survey paper
R. Andrades, M. R. Mendoza · 28 Sep 2021

Toward a Unified Framework for Debugging Concept-based Models
A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso · 23 Sep 2021

Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models
T. Klein, Moin Nabi · 10 Sep 2021 · ReLM, LRM

Survey of Low-Resource Machine Translation
Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch · 01 Sep 2021 · AIMat

Explaining Bayesian Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft · 23 Aug 2021 · BDL, AAML

Interpretable SincNet-based Deep Learning for Emotion Recognition from EEG brain activity
J. M. M. Torres, Mirco Ravanelli, Sara E. Medina-DeVilliers, M. Lerner, Giuseppe Riccardi · 18 Jul 2021

Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction
Jiahua Rao, Shuangjia Zheng, Yuedong Yang · 01 Jul 2021

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin · 24 Jun 2021

False perfection in machine prediction: Detecting and assessing circularity problems in machine learning
Michael Hagmann, Stefan Riezler · 23 Jun 2021

Speech is Silver, Silence is Golden: What do ASVspoof-trained Models Really Learn?
Nicolas M. Müller, Franziska Dieckmann, Pavel Czempin, Roman Canals, Konstantin Böttinger, Jennifer Williams · 23 Jun 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel · 15 May 2021 · XAI

Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization
Damien Teney, Ehsan Abbasnejad, Simon Lucey, A. Hengel · 12 May 2021

SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects
Oliver T. Unke, Stefan Chmiela, M. Gastegger, Kristof T. Schütt, H. E. Sauceda, K. Müller · 01 May 2021

Why AI is Harder Than We Think
Melanie Mitchell · 26 Apr 2021

Towards a Collective Agenda on AI for Earth Science Data Analysis
D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls · 11 Apr 2021 · AI4CE

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
Ioannis Kakogeorgiou, Konstantinos Karantzalos · 03 Apr 2021 · XAI

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong · 20 Mar 2021 · FaML, AI4CE, LRM

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes · 18 Mar 2021 · OOD

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro · 30 Nov 2020 · FAtt, AI4TS

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer, P. Schramowski, Kristian Kersting · 25 Nov 2020 · FAtt

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie · 18 Nov 2020 · MLT

It's All in the Name: A Character Based Approach To Infer Religion
Rochana Chaturvedi, Sugat Chaturvedi · 27 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel · 23 Oct 2020 · FAtt

A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning
Martin Mundt, Yongjun Hong, Iuliia Pliushch, Visvanathan Ramesh · 03 Sep 2020 · CLL

Revisiting the Modifiable Areal Unit Problem in Deep Traffic Prediction with Visual Analytics
Wei Zeng, Chengqiao Lin, Juncong Lin, Jincheng Jiang, Jiazhi Xia, Cagatay Turkay, Wei-Neng Chen · 30 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider · 01 Jul 2020

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft · 16 Jun 2020 · UQCV, FAtt

Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon · 05 Jun 2020

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir · 18 May 2020 · FAtt

InteractionNet: Modeling and Explaining of Noncovalent Protein-Ligand Interactions with Noncovalent Graph Neural Network and Layer-Wise Relevance Propagation
Hyeoncheol Cho, E. Lee, I. Choi · 12 May 2020 · GNN, FAtt

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · 17 Mar 2020 · XAI

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras, Ahmed Osman, Wojciech Samek · 16 Mar 2020 · XAI, AAML

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze · 03 Feb 2020 · AAML, FAtt, XAI

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak · 16 Dec 2019

Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin · 22 Oct 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller · 26 Sep 2019 · XAI

Explaining and Interpreting LSTMs
L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek · 25 Sep 2019 · FAtt, AI4TS

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek · 27 Jul 2019