One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency
Kacper Sokol, Peter A. Flach
27 January 2020 · arXiv:2001.09734

Papers citing "One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency"

30 papers

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
Zana Buçinca, S. Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos
05 Oct 2024

Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI
22 Sep 2024

Data Science Principles for Interpretable and Explainable AI
Kris Sankaran
FaML
17 May 2024

Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks
Tangrui Li, Jun Zhou
23 Apr 2024

SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin, Danila Eremenko, A. Konstantinov
07 Aug 2023

Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System
Mouadh Guesmi, Mohamed Amine Chatti, Shoeb Joarder, Qurat Ul Ain, R. Alatrash, Clara Siepmann, Tannaz Vahidi
09 Jun 2023

Navigating Explanatory Multiverse Through Counterfactual Path Geometry
Kacper Sokol, E. Small, Yueqing Xuan
05 Jun 2023

Reason to explain: Interactive contrastive explanations (REASONX)
Laura State, Salvatore Ruggieri, Franco Turini
LRM
29 May 2023

Visualization for Recommendation Explainability: A Survey and New Perspectives
Mohamed Amine Chatti, Mouadh Guesmi, Arham Muslim
XAI, HAI, LRM
19 May 2023

One Explanation Does Not Fit XIL
Felix Friedrich, David Steinmann, Kristian Kersting
LRM
14 Apr 2023

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren, Mark T. Keane, Christophe Guéret, Eoin Delaney
16 Mar 2023

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan, Kacper Sokol
07 Feb 2023

Behaviour Trees for Creating Conversational Explanation Experiences
A. Wijekoon, D. Corsar, Nirmalie Wiratunga
11 Nov 2022

Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective
José de Sousa Ribeiro Filho, Lucas F. F. Cardoso, R. Silva, Vitor Cirilo Araujo Santos, Nikolas Carneiro, Ronnie Cley de Oliveira Alves
18 Oct 2022

Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly
XAI, FAtt, LRM
29 Jul 2022

Why we do need Explainable AI for Healthcare
Giovanni Cina, Tabea E. Rober, Rob Goedhart, Ilker Birbil
30 Jun 2022

Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
13 Jun 2022

Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning
Ulrike Kuhl, André Artelt, Barbara Hammer
06 May 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
27 Jan 2022

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol, Peter A. Flach
29 Dec 2021

AI Ethics Principles in Practice: Perspectives of Designers and Developers
Conrad Sanderson, David M. Douglas, Qinghua Lu, Emma Schleiger, Jon Whittle, J. Lacey, G. Newnham, S. Hajkowicz, Cathy J. Robinson, David Hansen
FaML
14 Dec 2021

A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
Marco Matarese, F. Rea, A. Sciutti
27 Sep 2021

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey
Richard Dazeley, Peter Vamplew, Francisco Cruz
20 Aug 2021

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
XAI
21 May 2021

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
FAtt
17 Feb 2021

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
XAI
15 Feb 2021

The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn, Stephan Jacobs, M. Senadeera, Vuong Le, S. Coghlan
10 Dec 2020

Explainable Empirical Risk Minimization
Linli Zhang, Georgios Karakasidis, Arina Odnoblyudova, Leyla Dogruel, Alex Jung
03 Sep 2020

The Grammar of Interactive Explanatory Model Analysis
Hubert Baniecki, Dariusz Parzych, P. Biecek
01 May 2020