ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Home > Papers > 1902.10178 > Cited By
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

26 February 2019 (arXiv:1902.10178)
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller

Papers citing "Unmasking Clever Hans Predictors and Assessing What Machines Really Learn"

50 / 107 papers shown
Wasserstein Distances Made Explainable: Insights into Dataset Shifts and Transport Phenomena
Philip Naumann, Jacob R. Kauffmann, G. Montavon (09 May 2025)

Interactive Medical Image Analysis with Concept-based Similarity Reasoning
Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, A. Hengel, Zhibin Liao, Johan W. Verjans, Minh Nguyen Nhat To, Vu Minh Hieu Phan (10 Mar 2025)

Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization
Shunxin Wang, Raymond N. J. Veldhuis, N. Strisciuglio (05 Mar 2025)

The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and its Implications for Participation
Martin Mundt, Anaelia Ovalle, Felix Friedrich, A Pranav, Subarnaduti Paul, Manuel Brack, Kristian Kersting, William Agnew (05 Feb 2025)

Pfungst and Clever Hans: Identifying the unintended cues in a widely used Alzheimer's disease MRI dataset using explainable deep learning
C. Tinauer, Maximilian Sackl, Rudolf Stollberger, Stefan Ropele, C. Langkammer (27 Jan 2025)

Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair (29 Nov 2024)
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers
Lam Nguyen Tung, Steven Cho, Xiaoning Du, Neelofar Neelofar, Valerio Terragni, Stefano Ruberto, Aldeida Aleti (30 Oct 2024)

Study on the Helpfulness of Explainable Artificial Intelligence
Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing (14 Oct 2024)

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki (22 Sep 2024)

Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia (14 Sep 2024)

Revealing the Learning Process in Reinforcement Learning Agents Through Attention-Oriented Metrics
Charlotte Beylier, Simon M. Hofmann, Nico Scherf (20 Jun 2024)

MambaLRP: Explaining Selective State Space Sequence Models
F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle (11 Jun 2024)

Language-guided Detection and Mitigation of Unknown Dataset Bias
Zaiying Zhao, Soichiro Kumano, Toshihiko Yamasaki (05 Jun 2024)

Exposing Image Classifier Shortcuts with Counterfactual Frequency (CoF) Tables
James Hinns, David Martens (24 May 2024)

Explaining Text Similarity in Transformer Models
Alexandros Vasileiou, Oliver Eberle (10 May 2024)
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato (25 Apr 2024)

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
F. Mumuni, A. Mumuni (11 Mar 2024)

Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Ángel Delgado-Panadero, Beatriz Hernández-Lorca, María Teresa García-Ordás, J. Benítez-Andrades (14 Feb 2024)

RudolfV: A Foundation Model by Pathologists for Pathologists
Jonas Dippel, Barbara Feulner, Tobias Winterhoff, Timo Milbich, Stephan Tietz, ..., David Horst, Lukas Ruff, Klaus-Robert Muller, Frederick Klauschen, Maximilian Alber (08 Jan 2024)

Towards Interpretable Classification of Leukocytes based on Deep Learning
S. Röhrl, Johannes Groll, M. Lengl, Simon Schumann, C. Klenk, D. Heim, Martin Knopp, Oliver Hayden, Klaus Diepold (24 Nov 2023)

Labeling Neural Representations with Inverse Recognition
Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne (22 Nov 2023)

Rethinking the Evaluating Framework for Natural Language Understanding in AI Systems: Language Acquisition as a Core for Future Metrics
Patricio Vera, Pedro Moya, Lisa Barraza (21 Sep 2023)
What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems
Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao (20 Jul 2023)

A Vulnerability of Attribution Methods Using Pre-Softmax Scores
Miguel A. Lerma, Mirtha Lucas (06 Jul 2023)

Improving neural network representations using human similarity judgments
Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine L. Hermann, Andrew Kyle Lampinen, Simon Kornblith (07 Jun 2023)

One Explanation Does Not Fit XIL
Felix Friedrich, David Steinmann, Kristian Kersting (14 Apr 2023)

Dialogue Games for Benchmarking Language Understanding: Motivation, Taxonomy, Strategy
David Schlangen (14 Apr 2023)

Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Muller, G. Montavon (12 Apr 2023)

Mark My Words: Dangers of Watermarked Images in ImageNet
Kirill Bykov, Klaus-Robert Muller, Marina M.-C. Höhne (09 Mar 2023)

On the contribution of pre-trained models to accuracy and utility in modeling distributed energy resources
H. Kazmi, Pierre Pinson (22 Feb 2023)

SpecXAI -- Spectral interpretability of Deep Learning Models
Stefan Druc, Peter Wooldridge, A. Krishnamurthy, S. Sarkar, Aditya Balu (20 Feb 2023)
On The Coherence of Quantitative Evaluation of Visual Explanations
Benjamin Vandersmissen, José Oramas (14 Feb 2023)

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz (07 Feb 2023)

Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity
Vincent Souveton, Arnaud Guillin, J. Jasche, G. Lavaux, Manon Michel (03 Feb 2023)

Cluster-CAM: Cluster-Weighted Visual Interpretation of CNNs' Decision in Image Classification
Zhenpeng Feng, H. Ji, M. Daković, Xiyang Cui, Mingzhe Zhu, Ljubisa Stankovic (03 Feb 2023)

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Muller, G. Montavon (30 Dec 2022)

Topical Hidden Genome: Discovering Latent Cancer Mutational Topics using a Bayesian Multilevel Context-learning Approach
Saptarshi Chakraborty, Zoe Guan, C. Begg, R. Shen (30 Dec 2022)

Criteria for Classifying Forecasting Methods
Tim Januschowski, Jan Gasthaus, Bernie Wang, David Salinas, Valentin Flunkert, Michael Bohlke-Schneider, Laurent Callot (07 Dec 2022)

Localized Shortcut Removal
Nicolas M. Muller, Jochen Jacobs, Jennifer Williams, Konstantin Böttinger (24 Nov 2022)
Analysis of a Deep Learning Model for 12-Lead ECG Classification Reveals Learned Features Similar to Diagnostic Criteria
Theresa Bender, J. Beinecke, D. Krefting, Carolin Müller, Henning Dathe, T. Seidler, Nicolai Spicher, Anne-Christin Hauschild (03 Nov 2022)

The Debate Over Understanding in AI's Large Language Models
Melanie Mitchell, D. Krakauer (14 Oct 2022)

Shortcut Learning of Large Language Models in Natural Language Understanding
Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, Xia Hu (25 Aug 2022)

Discovering Bugs in Vision Models using Off-the-shelf Image Generation and Captioning
Olivia Wiles, Isabela Albuquerque, Sven Gowal (18 Aug 2022)

How Robust is Unsupervised Representation Learning to Distribution Shift?
Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal (17 Jun 2022)

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin (07 Jun 2022)

Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer, Idan Schwartz, Lior Wolf (02 Jun 2022)
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu, Sizhe Chen, Cihang Xie, X. Huang (24 May 2022)

Perception Visualization: Seeing Through the Eyes of a DNN
Loris Giulivi, Mark J. Carman, Giacomo Boracchi (21 Apr 2022)

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger (20 Apr 2022)

Explainable Analysis of Deep Learning Methods for SAR Image Classification
Sheng Su, Ziteng Cui, Weiwei Guo, Zenghui Zhang, Wenxian Yu (14 Apr 2022)