arXiv: 1902.10178

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Nature Communications (Nat Commun), 2019
26 February 2019
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
Papers citing "Unmasking Clever Hans Predictors and Assessing What Machines Really Learn" (50 of 383 shown)
Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
G. Cabour, A. Morales, É. Ledoux, S. Bassetto
02 Jun 2021

Optimal Sampling Density for Nonparametric Regression
Danny Panknin, Klaus-Robert Muller, Shinichi Nakajima
25 May 2021

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
XAI
21 May 2021
Expressive Explanations of DNNs by Combining Concept Analysis with ILP
Deutsche Jahrestagung für Künstliche Intelligenz (KI), 2020
Johannes Rabold, Gesina Schwalbe, Ute Schmid
16 May 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Data Mining and Knowledge Discovery (DMKD), 2021
Gesina Schwalbe, Bettina Finzel
XAI
15 May 2021

Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization
Computer Vision and Pattern Recognition (CVPR), 2021
Damien Teney, Ehsan Abbasnejad, Simon Lucey, Anton Van Den Hengel
12 May 2021
Towards Benchmarking the Utility of Explanations for Model Debugging
Maximilian Idahl, Lijun Lyu, U. Gadiraju, Avishek Anand
XAI
10 May 2021

This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
05 May 2021

SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects
Nature Communications (Nat Commun), 2021
Oliver T. Unke, Stefan Chmiela, M. Gastegger, Kristof T. Schütt, H. E. Sauceda, K. Müller
01 May 2021
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle
AAML
29 Apr 2021

Why AI is Harder Than We Think
Melanie Mitchell
26 Apr 2021

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
Julia Rosenzweig, Joachim Sicking, Sebastian Houben, Michael Mock, Maram Akila
AAML
22 Apr 2021
Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models
A. Koeppe, F. Bamer, M. Selzer, B. Nestler, Bernd Markert
PINN, AI4CE
20 Apr 2021

NICE: An Algorithm for Nearest Instance Counterfactual Explanations
Data Mining and Knowledge Discovery (DMKD), 2021
Dieter Brughmans, Pieter Leyman, David Martens
15 Apr 2021

A Conceptual Framework for Establishing Trust in Real World Intelligent Systems
Cognitive Systems Research (CSR), 2021
Michael Guckert, Nils Gumpfer, J. Hannig, Till Keller, N. Urquhart
12 Apr 2021
Towards a Collective Agenda on AI for Earth Science Data Analysis
IEEE Geoscience and Remote Sensing Magazine (GRSM), 2021
D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls
AI4CE
11 Apr 2021

White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
Meghna P. Ayyar, J. Benois-Pineau, A. Zemmari
FAtt
06 Apr 2021

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
International Journal of Applied Earth Observation and Geoinformation (JAEOG), 2021
Ioannis Kakogeorgiou, Konstantinos Karantzalos
XAI
03 Apr 2021
STARdom: an architecture for trusted and secure human-centered manufacturing systems
Advances in Production Management Systems (APMS), 2021
Jože M. Rožanec, Patrik Zajec, K. Kenda, I. Novalija, B. Fortuna, ..., Diego Reforgiato Recupero, D. Kyriazis, G. Sofianidis, Spyros Theodoropoulos, John Soldatos
02 Apr 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Statistics Surveys (Stat. Surv.), 2021
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
FaML, AI4CE, LRM
20 Mar 2021
Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Environmental Data Science (EDS), 2021
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes
OOD
18 Mar 2021

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze
09 Mar 2021

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Information Fusion (Inf. Fusion), 2021
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge
CML
07 Mar 2021
Deep Learning Based Decision Support for Medicine -- A Case Study on Skin Cancer Diagnosis
Adriano Lucieri, Andreas Dengel, Sheraz Ahmed
02 Mar 2021

Uncertainty Quantification by Ensemble Learning for Computational Optical Form Measurements
L. Hoffmann, I. Fortmeier, Clemens Elster
UQCV
01 Mar 2021

KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence
Andreas Holzinger, Anna Saranti, Heimo Mueller
28 Feb 2021
PredDiff: Explanations and Interactions from Conditional Expectations
Artificial Intelligence (AI), 2021
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
FAtt
26 Feb 2021

Benchmarking and Survey of Explanation Methods for Black Box Models
Data Mining and Knowledge Discovery (DMKD), 2021
F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo
XAI
25 Feb 2021

Bandits for Learning to Explain from Explanations
Freya Behrens, Stefano Teso, Davide Mottin
FAtt
07 Feb 2021
MC-LSTM: Mass-Conserving LSTM
International Conference on Machine Learning (ICML), 2021
Pieter-Jan Hoedt, Frederik Kratzert, D. Klotz, Christina Halmich, Markus Holzleitner, G. Nearing, Sepp Hochreiter, Günter Klambauer
13 Jan 2021

Quantitative Evaluations on Saliency Methods: An Experimental Study
Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen
FAtt, XAI
31 Dec 2020
A Survey on Neural Network Interpretability
IEEE Transactions on Emerging Topics in Computational Intelligence (IEEE TETCI), 2020
Yu Zhang, Peter Tiño, A. Leonardis, Shengcai Liu
FaML, XAI
28 Dec 2020

GANterfactual - Counterfactual Explanations for Medical Non-Experts using Generative Adversarial Learning
Frontiers in Artificial Intelligence (FAI), 2020
Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl, Elisabeth André
GAN, AAML, MedIm
22 Dec 2020
Towards Robust Explanations for Deep Neural Networks
Pattern Recognition (Pattern Recognit.), 2020
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
FAtt
18 Dec 2020

Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations
AAAI Conference on Artificial Intelligence (AAAI), 2020
Woo-Jeoung Nam, Jaesik Choi, Seong-Whan Lee
FAtt, AAML
07 Dec 2020

Explaining Predictions of Deep Neural Classifier via Activation Analysis
M. Stano, Wanda Benesova, L. Marták
FAtt, AAML, HAI
03 Dec 2020
TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
Knowledge Discovery and Data Mining (KDD), 2020
João Bento, Pedro Saleiro, André F. Cruz, Mário A. T. Figueiredo, P. Bizarro
FAtt, AI4TS
30 Nov 2020

Explaining Deep Learning Models for Structured Data using Layer-Wise Relevance Propagation
Ihsan Ullah, André Ríos, Vaibhav Gala, Susan Mckeever
FAtt
26 Nov 2020

Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization
Nature Communications (Nat Commun), 2020
T. Han, S. Nebelung, F. Pedersoli, Markus Zimmermann, M. Schulze-Hagen, ..., Christoph Haarburger, Fabian Kiessling, Christiane Kuhl, Volkmar Schulz, Daniel Truhn
MedIm
25 Nov 2020
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Computer Vision and Pattern Recognition (CVPR), 2020
Wolfgang Stammer, P. Schramowski, Kristian Kersting
FAtt
25 Nov 2020

Gradient Starvation: A Learning Proclivity in Neural Networks
Neural Information Processing Systems (NeurIPS), 2020
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie
MLT
18 Nov 2020

Use HiResCAM instead of Grad-CAM for faithful explanations of convolutional neural networks
R. Draelos, Lawrence Carin
FAtt
17 Nov 2020
Debiasing Convolutional Neural Networks via Meta Orthogonalization
Kurtis Evan David, Qiang Liu, Ruth C. Fong
FaML
15 Nov 2020

Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim
FAtt
10 Nov 2020

Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering
Quan Hung Tran, Nhan Dam, T. Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Q. Phung
FAtt
05 Nov 2020
MAPS-X: Explainable Multi-Robot Motion Planning via Segmentation
IEEE International Conference on Robotics and Automation (ICRA), 2020
Justin Kottinger, Shaull Almagor, Morteza Lahijanian
30 Oct 2020

It's All in the Name: A Character Based Approach To Infer Religion
Social Science Research Network (SSRN), 2020
Rochana Chaturvedi, Sugat Chaturvedi
27 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt
23 Oct 2020
Machine Learning Force Fields
Oliver T. Unke, Stefan Chmiela, H. E. Sauceda, M. Gastegger, I. Poltavsky, Kristof T. Schütt, A. Tkatchenko, K. Müller
AI4CE
14 Oct 2020

Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction
Tom Weber, S. Wermter
09 Oct 2020