
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
arXiv:1706.07269, 22 June 2017
Topic: XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences" (showing 50 of 1,335)
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
Topics: XAI, FAtt
07 Sep 2020

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Studies in Computational Intelligence (SCI), 2020
Nijat Mehdiyev, Peter Fettke
Topics: AI4TS
04 Sep 2020

Why should I not follow you? Reasons For and Reasons Against in Responsible Recommender Systems
G. Polleti, Douglas Luan de Souza, Fabio Gagliardi Cozman
03 Sep 2020

Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Jeremy E. Block, Eric D. Ragan
02 Sep 2020

Machine Reasoning Explainability
K. Čyras, R. Badrinath, S. Mohalik, A. Mujumdar, Alexandros Nikou, Alessandro Previti, Vaishnavi Sundararajan, Aneta Vulgarakis Feljan
Topics: LRM
01 Sep 2020

Counterfactual Explanations for Machine Learning on Multivariate Time Series Data
E. Ates, Burak Aksar, V. Leung, A. Coskun
Topics: AI4TS
25 Aug 2020

A Survey of Knowledge-based Sequential Decision Making under Uncertainty
Shiqi Zhang, Mohan Sridharan
19 Aug 2020

DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models
Furui Cheng, Yao Ming, Huamin Qu
Topics: CML, HAI
19 Aug 2020

Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation
Yubo Kou, Xinning Gui
19 Aug 2020

Tackling COVID-19 through Responsible AI Innovation: Five Steps in the Right Direction
David Leslie
15 Aug 2020

Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Sasha Rubin, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, Nina Narodytska
Topics: FAtt
13 Aug 2020

Bias and Discrimination in AI: a cross-disciplinary perspective
IEEE Technology and Society Magazine (TS), 2020
Xavier Ferrer, Tom van Nuenen, Jose Such, Mark Coté, Natalia Criado
Topics: FaML
11 Aug 2020

Making Sense of CNNs: Interpreting Deep Representations & Their Invariances with INNs
Robin Rombach, Patrick Esser, Bjorn Ommer
04 Aug 2020

Exploiting Game Theory for Analysing Justifications
Simon Marynissen, B. Bogaerts, M. Denecker
04 Aug 2020

IntroVAC: Introspective Variational Classifiers for Learning Interpretable Latent Subspaces
Marco Maggipinto, M. Terzi, Gian Antonio Susto
03 Aug 2020

A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution
A. Khademi, Vasant Honavar
Topics: CML
01 Aug 2020

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
Journal of Biomedical Informatics (JBI), 2020
A. Markus, J. Kors, P. Rijnbeek
31 Jul 2020

Neural Temporal Point Processes For Modelling Electronic Health Records
Joseph Enguehard, Dan Busbridge, Adam James Bozson, Claire Woodcock, Nils Y. Hammerla
27 Jul 2020

Model Interpretability: A Review of Methods and an Application to Insurance (original title in French: Interprétabilité des modèles : état des lieux des méthodes et application à l'assurance)
Dimitri Delcaillau, A. Ly, Franck Vermet, Alizé Papp
25 Jul 2020

Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2020
Xiaofeng Gao, Ran Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, Song-Chun Zhu
24 Jul 2020

Memory networks for consumer protection: unfairness exposed
Federico Ruggeri, F. Lagioia, Marco Lippi, Paolo Torroni
24 Jul 2020

Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
Ehsan Toreini, Mhairi Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Vladimiro González-Zelaya, P. Missier, Magdalene Ng, Aad van Moorsel
17 Jul 2020

On quantitative aspects of model interpretability
An-phi Nguyen, María Rodríguez Martínez
15 Jul 2020

Cause vs. Effect in Context-Sensitive Prediction of Business Process Instances
Information Systems (Inf. Syst.), 2020
Jens Brunk, M. Stierle, Leon Papke, K. Revoredo, Martin Matzner, J. Becker
15 Jul 2020

XAlgo: a Design Probe of Explaining Algorithms' Internal States via Question-Answering
Juan Rebanal, Yuqi Tang, Jordan Combitsis, Xiang 'Anthony' Chen
14 Jul 2020

Machine Learning Explainability for External Stakeholders
Umang Bhatt, Mckane Andrus, Adrian Weller, Alice Xiang
Topics: FaML, SILM
10 Jul 2020

Model Distillation for Revenue Optimization: Interpretable Personalized Pricing
Max Biggs, Wei-Ju Sun, M. Ettl
03 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider
01 Jul 2020

Unifying Model Explainability and Robustness via Machine-Checkable Concepts
Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar
Topics: AAML
01 Jul 2020

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
26 Jun 2020

Background Knowledge Injection for Interpretable Sequence Classification
S. Gsponer, Luca Costabello, Chan Le Van, Sumit Pai, Christophe Guéret, Georgiana Ifrim, Freddy Lecue
25 Jun 2020

Generative causal explanations of black-box classifiers
Neural Information Processing Systems (NeurIPS), 2020
Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
Topics: CML
24 Jun 2020

Explainable robotic systems: Understanding goal-driven actions in a reinforcement learning scenario
Francisco Cruz, Richard Dazeley, Peter Vamplew, Ithan Moreira
24 Jun 2020

Online Handbook of Argumentation for AI: Volume 1
OHAAI Collaboration: Federico Castagna, T. Kampik, Atefeh Keshavarzi Zafarghandi, Mickael Lafages, ..., Samy Sá, Stefan Sarkadi, Joseph Singleton, Kenneth Skiba, A. Xydis
22 Jun 2020

Non-repudiable provenance for clinical decision support systems
Elliot Fairweather, Rudolf Wittner, Martin Chapman, P. Holub, V. Curcin
19 Jun 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
Topics: XAI
19 Jun 2020

How does this interaction affect me? Interpretable attribution for feature interactions
Michael Tsang, Sirisha Rambhatla, Yan Liu
Topics: FAtt
19 Jun 2020

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad
Topics: XAI
16 Jun 2020

Contestable Black Boxes
Andrea Aler Tubella, Andreas Theodorou, Virginia Dignum, Loizos Michael
09 Jun 2020

Interpretable Classification of Bacterial Raman Spectra with Knockoff Wavelets
Charmaine Chia, Matteo Sesia, Chi-Sing Ho, S. Jeffrey, J. Dionne, Emmanuel J. Candès, R. Howe
08 Jun 2020

From Checking to Inference: Actual Causality Computations as Optimization Problems
Amjad Ibrahim, A. Pretschner
Topics: LRM
05 Jun 2020

Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test
Simon Kocbek, Primož Kocbek, Leona Cilar, Gregor Stiglic
02 Jun 2020

Aligning Faithful Interpretations with their Social Attribution
Transactions of the Association for Computational Linguistics (TACL), 2020
Alon Jacovi, Yoav Goldberg
01 Jun 2020

Explanations of Black-Box Model Predictions by Contextual Importance and Utility
S. Anjomshoae, Kary Främling, A. Najjar
30 May 2020

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo
Topics: XAI
29 May 2020

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Véronique Masson, Elisa Fromont
Topics: AI4TS
29 May 2020

Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI
I. Celino
27 May 2020

Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)
International Conference on Case-Based Reasoning (ICCBR), 2020
Mark T. Keane, Barry Smyth
Topics: CML
26 May 2020

The Skincare project, an interactive deep learning system for differential diagnosis of malignant skin lesions. Technical Report
Daniel Sonntag, Fabrizio Nunnari, H. Profitlich
19 May 2020

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir
Topics: FAtt
18 May 2020