arXiv:1712.00547
Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

2 December 2017
Tim Miller
Piers Howe
L. Sonenberg
    AI4TS
    SyDa

Papers citing "Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences"

50 / 56 papers shown
Beware of "Explanations" of AI
David Martens
Galit Shmueli
Theodoros Evgeniou
Kevin Bauer
Christian Janiesch
...
Claudia Perlich
Wouter Verbeke
Alona Zharova
Patrick Zschech
F. Provost
31
0
0
09 Apr 2025
GraphXAIN: Narratives to Explain Graph Neural Networks
Mateusz Cedro
David Martens
62
0
0
04 Nov 2024
A Mechanistic Explanatory Strategy for XAI
Marcin Rabiza
59
1
0
02 Nov 2024
An Actionability Assessment Tool for Explainable AI
Ronal Singh
Tim Miller
L. Sonenberg
Eduardo Velloso
F. Vetere
Piers Howe
Paul Dourish
27
2
0
19 Jun 2024
Fiper: a Visual-based Explanation Combining Rules and Feature Importance
Eleonora Cappuccio
D. Fadda
Rosa Lanzilotti
Salvatore Rinzivillo
FAtt
42
1
0
25 Apr 2024
How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Benjamin Frész
Elena Dubovitskaya
Danilo Brajovic
Marco F. Huber
Christian Horz
59
7
0
19 Apr 2024
Reason to explain: Interactive contrastive explanations (REASONX)
Laura State
Salvatore Ruggieri
Franco Turini
LRM
32
1
0
29 May 2023
The Case Against Explainability
Hofit Wasserman Rozen
N. Elkin-Koren
Ran Gilad-Bachrach
AILaw
ELM
36
1
0
20 May 2023
Explaining Model Confidence Using Counterfactuals
Thao Le
Tim Miller
Ronal Singh
L. Sonenberg
21
4
0
10 Mar 2023
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan
Kacper Sokol
38
7
0
07 Feb 2023
Machine Learning in Transaction Monitoring: The Prospect of xAI
Julie Gerlings
Ioanna D. Constantiou
17
2
0
14 Oct 2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
43
109
0
02 Oct 2022
Explainable Reinforcement Learning on Financial Stock Trading using SHAP
Satyam Kumar
Mendhikar Vishal
V. Ravi
AIFin
34
8
0
18 Aug 2022
"If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
Yaniv Yacoby
Ben Green
Christopher L. Griffin
Finale Doshi Velez
21
16
0
11 May 2022
Explain yourself! Effects of Explanations in Human-Robot Interaction
Jakob Ambsdorf
A. Munir
Yiyao Wei
Klaas Degkwitz
Harm Matthias Harms
...
Kyra Ahrens
Dennis Becker
Erik Strahl
Tom Weber
S. Wermter
27
8
0
09 Apr 2022
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol
Peter A. Flach
44
21
0
29 Dec 2021
Combining Sub-Symbolic and Symbolic Methods for Explainability
Anna Himmelhuber
S. Grimm
Sonja Zillner
Mitchell Joblin
Martin Ringsquandl
Thomas Runkler
21
5
0
03 Dec 2021
On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System
Helen Jiang
Erwen Senge
25
7
0
02 Dec 2021
Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
Francesco Sovrano
F. Vitali
M. Palmirani
35
10
0
02 Oct 2021
Trustworthy AI and Robotics and the Implications for the AEC Industry: A Systematic Literature Review and Future Potentials
Newsha Emaminejad
Reza Akhavian
28
48
0
27 Sep 2021
A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
Marco Matarese
F. Rea
A. Sciutti
32
13
0
27 Sep 2021
Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
Jean-Marie John-Mathews
50
34
0
20 Sep 2021
Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey
Richard Dazeley
Peter Vamplew
Francisco Cruz
32
60
0
20 Aug 2021
Levels of explainable artificial intelligence for human-aligned conversational explanations
Richard Dazeley
Peter Vamplew
Cameron Foale
Charlotte Young
Sunil Aryal
F. Cruz
30
90
0
07 Jul 2021
Understanding Consumer Preferences for Explanations Generated by XAI Algorithms
Yanou Ramon
T. Vermeire
Olivier Toubia
David Martens
Theodoros Evgeniou
34
10
0
06 Jul 2021
Explainable AI, but explainable to whom?
Julie Gerlings
Millie Søndergaard Jensen
Arisa Shollo
40
43
0
10 Jun 2021
Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
Tomas Folke
Scott Cheng-Hsin Yang
S. Anderson
Patrick Shafto
21
19
0
08 Jun 2021
A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations
Varun Nagaraj Rao
Xingjian Zhen
K. Hovsepian
Mingwei Shen
37
18
0
29 Apr 2021
GraphSVX: Shapley Value Explanations for Graph Neural Networks
Alexandre Duval
Fragkiskos D. Malliaros
FAtt
17
86
0
18 Apr 2021
Contrastive Explanations of Plans Through Model Restrictions
Benjamin Krarup
Senka Krivic
Daniele Magazzeni
D. Long
Michael Cashmore
David E. Smith
22
32
0
29 Mar 2021
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane
Eoin M. Kenny
Eoin Delaney
Barry Smyth
CML
29
146
0
26 Feb 2021
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer
Daniel Oster
Timo Speith
Holger Hermanns
Lena Kästner
Eva Schmidt
Andreas Sesing
Kevin Baum
XAI
68
415
0
15 Feb 2021
EUCA: the End-User-Centered Explainable AI Framework
Weina Jin
Jianyu Fan
D. Gromala
Philippe Pasquier
Ghassan Hamarneh
40
24
0
04 Feb 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus
Catarina Belém
Vladimir Balayan
João Bento
Pedro Saleiro
P. Bizarro
João Gama
136
119
0
21 Jan 2021
Explanation from Specification
Harish Naik
Gyorgy Turán
XAI
27
0
0
13 Dec 2020
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn
Stephan Jacobs
M. Senadeera
Vuong Le
S. Coghlan
33
112
0
10 Dec 2020
Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
53
243
0
21 Nov 2020
Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
33
33
0
06 Nov 2020
A Game-Based Approach for Helping Designers Learn Machine Learning Concepts
Chelsea M. Myers
Jiachi Xie
Jichen Zhu
19
4
0
11 Sep 2020
Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford
Eoin M. Kenny
Mark T. Keane
23
6
0
10 Sep 2020
Sequential Explanations with Mental Model-Based Policies
A. Yeung
Shalmali Joshi
Joseph Jay Williams
Frank Rudzicz
FAtt
LRM
36
15
0
17 Jul 2020
Data-Driven Game Development: Ethical Considerations
M. S. El-Nasr
Erica Kleinman
22
22
0
18 Jun 2020
Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI
I. Celino
12
5
0
27 May 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber
Katharina Weitz
Elisabeth André
Ofra Amir
FAtt
21
64
0
18 May 2020
The Grammar of Interactive Explanatory Model Analysis
Hubert Baniecki
Dariusz Parzych
P. Biecek
24
44
0
01 May 2020
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
Andrés Páez
27
191
0
22 Feb 2020
Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches
Kacper Sokol
Peter A. Flach
XAI
19
299
0
11 Dec 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
41
6,125
0
22 Oct 2019
FACE: Feasible and Actionable Counterfactual Explanations
Rafael Poyiadzi
Kacper Sokol
Raúl Santos-Rodríguez
T. D. Bie
Peter A. Flach
15
365
0
20 Sep 2019
On the Semantic Interpretability of Artificial Intelligence Models
V. S. Silva
André Freitas
Siegfried Handschuh
AI4CE
25
8
0
09 Jul 2019