ResearchTrend.AI

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
arXiv:1706.07269 (v3, latest) · 22 June 2017 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences" (showing 50 of 1,335)
Bandits for Learning to Explain from Explanations
Freya Behrens, Stefano Teso, Davide Mottin
FAtt · 07 Feb 2021

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri
05 Feb 2021

"I Don't Think So": Summarizing Policy Disagreements for Agent Comparison
AAAI Conference on Artificial Intelligence (AAAI), 2021
Yotam Amitai, Ofra Amir
LLMAG · 05 Feb 2021

AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks
International Symposium on Technology and Society (ISTAS), 2020
Mckane Andrus, Sarah Dean, T. Gilbert, Nathan Lambert, Tom Zick
04 Feb 2021

EUCA: the End-User-Centered Explainable AI Framework
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
04 Feb 2021

When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data
Peter Hase, Joey Tianyi Zhou
XAI · 03 Feb 2021

Directive Explanations for Actionable Explainability in Machine Learning Applications
Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, L. Sonenberg, Eduardo Velloso, F. Vetere
03 Feb 2021
Evaluating the Interpretability of Generative Models by Interactive Reconstruction
International Conference on Human Factors in Computing Systems (CHI), 2021
A. Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez
02 Feb 2021

Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens
International Conference on Human Factors in Computing Systems (CHI), 2021
Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C. Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos
01 Feb 2021

Counterfactual State Explanations for Reinforcement Learning Agents via Generative Deep Learning
Artificial Intelligence (AI), 2021
Matthew Lyle Olson, Roli Khanna, Lawrence Neal, Fuxin Li, Weng-Keen Wong
CML · 29 Jan 2021

Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling
David Harbecke
AAML · 28 Jan 2021

Cognitive Perspectives on Context-based Decisions and Explanations
Marcus Westberg, Kary Främling
25 Jan 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
International Conference on Human Factors in Computing Systems (CHI), 2021
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Artificial Intelligence (AI), 2021
Danding Wang, Wencan Zhang, Brian Y. Lim
FAtt · 23 Jan 2021
Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor, Mohiuddin Ahmed
XAI · 23 Jan 2021

A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations
International Conference on Case-Based Reasoning (ICCBR), 2021
Barry Smyth, Mark T. Keane
CML · 22 Jan 2021

GLocalX -- From Local to Global Explanations of Black Box AI Models
Artificial Intelligence (AI), 2021
Mattia Setzu, Riccardo Guidotti, A. Monreale, Franco Turini, D. Pedreschi, F. Giannotti
19 Jan 2021

Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
Han Liu, Vivian Lai, Chenhao Tan
13 Jan 2021

Expanding Explainability: Towards Social Transparency in AI systems
International Conference on Human Factors in Computing Systems (CHI), 2021
Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz
12 Jan 2021

Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry
International Conference on Human Factors in Computing Systems (CHI), 2021
J. Benjamin, Arne Berger, Nick Merrill, James Pierce
11 Jan 2021

Argument Schemes and Dialogue for Explainable Planning
Quratul-ain Mahesar, Simon Parsons
07 Jan 2021

How Much Automation Does a Data Scientist Want?
Dakuo Wang, Q. V. Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael J. Muller, Lisa Amini
AI4CE · 07 Jan 2021
Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows
C. Ryan, Christophe Guéret, D. Berry, Medb Corcoran, Mark T. Keane, Brian Mac Namee
06 Jan 2021

One-shot Policy Elicitation via Semantic Reward Manipulation
Aaquib Tabrez, Ryan Leonard, Bradley Hayes
06 Jan 2021

Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021
Md. Naimul Hoque, Klaus Mueller
CML · 03 Jan 2021

Modeling Disclosive Transparency in NLP Application Descriptions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Michael Stephen Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, Wenjie Wang
02 Jan 2021

Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld
01 Jan 2021

Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, Srini Iyer
AAML · 30 Dec 2020

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
Journal of Machine Learning Research (JMLR), 2020
Hubert Baniecki, Wojciech Kretowicz, Piotr Piątyszek, J. Wiśniewski, P. Biecek
FaML · 28 Dec 2020
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Findings, 2020
Alexis Ross, Ana Marasović, Matthew E. Peters
27 Dec 2020

Brain-inspired Search Engine Assistant based on Knowledge Graph
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Xuejiao Zhao, Huanhuan Chen, Zhenchang Xing, Chunyan Miao
25 Dec 2020

GANterfactual - Counterfactual Explanations for Medical Non-Experts using Generative Adversarial Learning
Frontiers in Artificial Intelligence (FAI), 2020
Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl, Elisabeth André
GAN · AAML · MedIm · 22 Dec 2020

Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
Journal of Management Information Systems (JMIS), 2020
Jiaheng Xie, Xinyu Liu
HAI · 21 Dec 2020

On Relating 'Why?' and 'Why Not?' Explanations
Alexey Ignatiev, Nina Narodytska, Nicholas M. Asher, Sasha Rubin
XAI · FAtt · LRM · 21 Dec 2020

Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks
Kieran Browne, Ben Swift
AAML · GAN · 18 Dec 2020

XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory
Nazanin Fouladgar, Kary Främling
XAI · 17 Dec 2020

On Exploiting Hitting Sets for Model Reconciliation
AAAI Conference on Artificial Intelligence (AAAI), 2020
Stylianos Loukas Vasileiou, Alessandro Previti, William Yeoh
16 Dec 2020
Explanation from Specification
Harish Naik, Gyorgy Turán
XAI · 13 Dec 2020

The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn, Stephan Jacobs, M. Senadeera, Vuong Le, S. Coghlan
10 Dec 2020

CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning
Neural Networks (NN), 2020
Haoteng Tang, Guixiang Ma, Lifang He, Heng-Chiao Huang, Chen Tang
GNN · 10 Dec 2020

Influence-Driven Explanations for Bayesian Network Classifiers
Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2020
Antonio Rago, Emanuele Albini, P. Baroni, Francesca Toni
10 Dec 2020

Deep Argumentative Explanations
Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni
AAML · 10 Dec 2020

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
International Conference on Human Factors in Computing Systems (CHI), 2020
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 10 Dec 2020

Competition analysis on the over-the-counter credit default swap market
L. Abraham
03 Dec 2020
Reviewing the Need for Explainable Artificial Intelligence (xAI)
Hawaii International Conference on System Sciences (HICSS), 2020
Julie Gerlings, Arisa Shollo, Ioanna D. Constantiou
02 Dec 2020

Why Did the Robot Cross the Road? A User Study of Explanation in Human-Robot Interaction
Interacción (HCI International), 2020
Zachary Taschdjian
30 Nov 2020

Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction
A. Lindsay, B. Craenen, S. Dalzel-Job, Robin L. Hill, Ronald P. A. Petrick
27 Nov 2020

Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Computer Vision and Pattern Recognition (CVPR), 2020
Wolfgang Stammer, P. Schramowski, Kristian Kersting
FAtt · 25 Nov 2020

Model Elicitation through Direct Questioning
Sachin Grover, David E. Smith, S. Kambhampati
24 Nov 2020

The Interpretable Dictionary in Sparse Coding
Edward J. Kim, Connor Onweller, Andrew O'Brien, Kathleen F. McCoy
24 Nov 2020