arXiv 2001.05371
Making deep neural networks right for the right scientific reasons by interacting with their explanations
15 January 2020
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
Papers citing "Making deep neural networks right for the right scientific reasons by interacting with their explanations" (50 of 121 shown):
Better Decisions through the Right Causal World Model. Elisabeth Dillies, Quentin Delfosse, Jannis Blüml, Raban Emunds, Florian Peter Busch, Kristian Kersting. 09 Apr 2025.
Human-in-the-loop or AI-in-the-loop? Automate or Collaborate? S. Natarajan, Saurabh Mathur, Sahil Sidheekh, Wolfgang Stammer, Kristian Kersting. 18 Dec 2024.
Learning Visually Grounded Domain Ontologies via Embodied Conversation and Explanation. Jonghyuk Park, A. Lascarides, S. Ramamoorthy. 13 Dec 2024.
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers. Lam Nguyen Tung, Steven Cho, Xiaoning Du, Neelofar Neelofar, Valerio Terragni, Stefano Ruberto, Aldeida Aleti. 30 Oct 2024.
Effective Guidance for Model Attention with Simple Yes-no Annotations. Seongmin Lee, Ali Payani, Duen Horng Chau. [FAtt] 29 Oct 2024.
Interactive Explainable Anomaly Detection for Industrial Settings. Daniel Gramelt, Timon Höfer, Ute Schmid. [AAML, HAI] 01 Oct 2024.
To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems. Gaole He, Abri Bharos, U. Gadiraju. 22 Sep 2024.
Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration. Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl. 19 Sep 2024.
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features. Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, G. Montavon, Klaus-Robert Müller. 30 Aug 2024.
The Clever Hans Effect in Unsupervised Learning. Jacob R. Kauffmann, Jonas Dippel, Lukas Ruff, Wojciech Samek, Klaus-Robert Müller, G. Montavon. [SSL, CML, HAI] 15 Aug 2024.
Problem Solving Through Human-AI Preference-Based Cooperation. Subhabrata Dutta, Timo Kaufmann, Goran Glavas, Ivan Habernal, Kristian Kersting, Frauke Kreuter, Mira Mezini, Iryna Gurevych, Eyke Hüllermeier, Hinrich Schuetze. 14 Aug 2024.
xAI-Drop: Don't Use What You Cannot Explain. Vincenzo Marco De Luca, Antonio Longa, Andrea Passerini, Pietro Liò. 29 Jul 2024.
Spurious Correlations in Concept Drift: Can Explanatory Interaction Help? Cristiana Lalletti, Stefano Teso. 23 Jul 2024.
SLIM: Spuriousness Mitigation with Minimal Human Annotations. Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Kwan-Liu Ma. 08 Jul 2024.
Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density. Peiyu Yang, Naveed Akhtar, Mubarak Shah, Ajmal Saeed Mian. [AAML] 05 Jul 2024.
Model Guidance via Explanations Turns Image Classifiers into Segmentation Models. Xiaoyan Yu, Jannik Franzen, Wojciech Samek, Marina M.-C. Höhne, Dagmar Kainmueller. 03 Jul 2024.
A Moonshot for AI Oracles in the Sciences. Bryan Kaiser, Tailin Wu, Maike Sonnewald, Colin Thackray, Skylar Callis. [AI4CE] 25 Jun 2024.
Neural Concept Binder. Wolfgang Stammer, Antonia Wüst, David Steinmann, Kristian Kersting. [OCL] 14 Jun 2024.
HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning. Quentin Delfosse, Jannis Blüml, Bjarne Gregori, Kristian Kersting. 06 Jun 2024.
Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning. Hector Kohler, Quentin Delfosse, R. Akrour, Kristian Kersting, Philippe Preux. 23 May 2024.
An Explanatory Model Steering System for Collaboration between Domain Experts and AI. Aditya Bhattacharya, Simone Stumpf, K. Verbert. 17 May 2024.
Representation Debiasing of Generated Data Involving Domain Experts. Aditya Bhattacharya, Simone Stumpf, K. Verbert. 17 May 2024.
When a Relation Tells More Than a Concept: Exploring and Evaluating Classifier Decisions with CoReX. Bettina Finzel, Patrick Hilme, Johannes Rabold, Ute Schmid. 02 May 2024.
Towards a Research Community in Interpretable Reinforcement Learning: the InterpPol Workshop. Hector Kohler, Quentin Delfosse, Paul Festor, Philippe Preux. 16 Apr 2024.
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression. Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin. [KELM] 15 Apr 2024.
Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda. Johannes Schneider. 15 Apr 2024.
Learning To Guide Human Decision Makers With Vision-Language Models. Debodeep Banerjee, Stefano Teso, Burcu Sayin, Andrea Passerini. 25 Mar 2024.
A survey on Concept-based Approaches For Model Improvement. Avani Gupta, P. J. Narayanan. [LRM] 21 Mar 2024.
Towards a general framework for improving the performance of classifiers using XAI methods. Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete. 15 Mar 2024.
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning. F. Mumuni, A. Mumuni. [AAML] 11 Mar 2024.
Right on Time: Revising Time Series Models by Constraining their Explanations. Maurice Kraus, David Steinmann, Antonia Wüst, Andre Kokozinski, Kristian Kersting. [AI4TS] 20 Feb 2024.
Pix2Code: Learning to Compose Neural Visual Concepts as Programs. Antonia Wüst, Wolfgang Stammer, Quentin Delfosse, D. Dhami, Kristian Kersting. 13 Feb 2024.
Where is the Truth? The Risk of Getting Confounded in a Continual World. Florian Peter Busch, Roshni Kamath, Rupert Mitchell, Wolfgang Stammer, Kristian Kersting, Martin Mundt. [CML, CLL] 09 Feb 2024.
EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations. Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, K. Verbert. 01 Feb 2024.
Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents. Quentin Delfosse, Sebastian Sztwiertnia, M. Rothermel, Wolfgang Stammer, Kristian Kersting. 11 Jan 2024.
Prompt-driven Latent Domain Generalization for Medical Image Classification. Siyuan Yan, Chi Liu, Zhen Yu, Lie Ju, Dwarikanath Mahapatra, B. Betz-Stablein, Victoria Mar, Monika Janda, Peter Soyer, Zongyuan Ge. [OOD, VLM, MedIm] 05 Jan 2024.
Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions. Aditya Bhattacharya. 29 Dec 2023.
Data-Centric Digital Agriculture: A Perspective. R. Roscher, Lukas Roth, C. Stachniss, Achim Walter. 06 Dec 2023.
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin. 28 Nov 2023.
Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement. Avani Gupta, Saurabh Saini, P. J. Narayanan. 26 Nov 2023.
Be Careful When Evaluating Explanations Regarding Ground Truth. Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, P. Biecek. [FAtt, AAML] 08 Nov 2023.
The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research. Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber. 11 Oct 2023.
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation. Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber, Barbara Plank. [LRM] 09 Oct 2023.
Lessons Learned from EXMOS User Studies: A Technical Report Summarizing Key Takeaways from User Studies Conducted to Evaluate The EXMOS Platform. Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, K. Verbert. 03 Oct 2023.
Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation. Sidney Bender, Christopher J. Anders, Pattarawat Chormai, Heike Marxfeld, J. Herrmann, G. Montavon. [CML] 02 Oct 2023.
May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability. Tong Zhang, X. J. Yang, Boyang Albert Li. 25 Sep 2023.
Targeted Activation Penalties Help CNNs Ignore Spurious Signals. Dekai Zhang, Matthew Williams, Francesca Toni. [AAML] 22 Sep 2023.
Towards an MLOps Architecture for XAI in Industrial Applications. Leonhard Faubel, Thomas Woudsma, Leila Methnani, Amir Ghorbani Ghezeljhemeidan, Fabian Buelow, ..., Willem D. van Driel, Benjamin Kloepper, Andreas Theodorou, Mohsen Nosratinia, Magnus Bång. 22 Sep 2023.
Learning by Self-Explaining. Wolfgang Stammer, Felix Friedrich, David Steinmann, Manuel Brack, Hikaru Shindo, Kristian Kersting. 15 Sep 2023.
Distance-Aware eXplanation Based Learning. Misgina Tsighe Hagos, Niamh Belton, Kathleen M. Curran, Brian Mac Namee. [FAtt] 11 Sep 2023.