ResearchTrend.AI

arXiv:1806.07552
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

20 June 2018
Richard J. Tomsett
Dave Braines
Daniel Harborne
Alun D. Preece
Supriyo Chakraborty
    FaML

Papers citing "Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems"

11 papers
A Mechanistic Explanatory Strategy for XAI
Marcin Rabiza
02 Nov 2024
How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Benjamin Frész, Elena Dubovitskaya, Danilo Brajovic, Marco F. Huber, Christian Horz
19 Apr 2024
The Case Against Explainability
Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach
AILaw, ELM
20 May 2023
Flexible and Inherently Comprehensible Knowledge Representation for Data-Efficient Learning and Trustworthy Human-Machine Teaming in Manufacturing Environments
Vedran Galetić, Alistair Nottle
19 May 2023
The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin
TDI
05 Oct 2022
On Two XAI Cultures: A Case Study of Non-technical Explanations in Deployed AI System
Helen Jiang, Erwen Senge
02 Dec 2021
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
28 Jul 2021
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
21 Jan 2021
Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Nijat Mehdiyev, Peter Fettke
AI4TS
04 Sep 2020
Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Xia Hu
FaML
31 Jul 2018