Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,335 papers shown

A Bayesian Account of Measures of Interpretability in Human-AI Interaction
S. Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, S. Kambhampati
22 Nov 2020

Explaining by Removing: A Unified Framework for Model Explanation · Journal of Machine Learning Research (JMLR), 2020
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 21 Nov 2020

Iterative Planning with Plan-Space Explanations: A Tool and User Study
Rebecca Eifler, Jörg Hoffmann
LRM · 19 Nov 2020

RADAR-X: An Interactive Mixed Initiative Planning Interface Pairing Contrastive Explanations and Revised Plan Suggestions · International Conference on Automated Planning and Scheduling (ICAPS), 2020
Kaya Stechly, S. Sreedharan, Sailik Sengupta, Subbarao Kambhampati
19 Nov 2020

A Survey on the Explainability of Supervised Machine Learning · Journal of Artificial Intelligence Research (JAIR), 2020
Nadia Burkart, Marco F. Huber
FaML, XAI · 16 Nov 2020

Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science
Adam J. Johs, Denise E. Agosto, Rosina O. Weber
13 Nov 2020

Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End
R. Mothilal, Divyat Mahajan, Chenhao Tan, Amit Sharma
FAtt, CML · 10 Nov 2020

Interpretable collaborative data analysis on distributed data
A. Imakura, Hiroaki Inaba, Yukihiko Okada, Tetsuya Sakurai
FedML · 09 Nov 2020

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 06 Nov 2020

Digital Nudging with Recommender Systems: Survey and Future Directions
Mathias Jesse, Dietmar Jannach
06 Nov 2020

Necessary and Sufficient Explanations in Abstract Argumentation
A. Borg, Floris Bex
04 Nov 2020

Towards Personalized Explanation of Robot Path Planning via User Feedback
Kayla Boggess, Shenghui Chen, Lu Feng
01 Nov 2020

Comprehensible Counterfactual Explanation on Kolmogorov-Smirnov Test
Zicun Cong, Lingyang Chu, Yu Yang, Jian Pei
01 Nov 2020

ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
Mokanarangan Thayaparan, Marco Valentino, André Freitas
LRM · 25 Oct 2020

Measuring Association Between Labels and Free-Text Rationales · Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
24 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt · 23 Oct 2020

Model Interpretability through the Lens of Computational Complexity · Neural Information Processing Systems (NeurIPS), 2020
Pablo Barceló, Mikaël Monet, Jorge A. Pérez, Bernardo Subercaseaux
23 Oct 2020

On Explaining Decision Trees
Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt · 21 Oct 2020

Axiom Learning and Belief Tracing for Transparent Decision Making in Robotics
Tiago Mota, Mohan Sridharan
20 Oct 2020

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review · ACM Computing Surveys (ACM CSUR), 2020
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
CML · 20 Oct 2020

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, J. Herbinger
AI4TS, AI4CE · 19 Oct 2020

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making
Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett
HAI · 15 Oct 2020

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, J. S. Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
ReLM, LRM · 15 Oct 2020

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
15 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
XAI, LRM · 12 Oct 2020

Towards a Conversational Measure of Trust
Mengyao Li, Areen Alsaid, Sofia I. Noejovich, E. V. Cross, John D. Lee
HILM · 10 Oct 2020

A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations
Andrea Ferrario, M. Loi
09 Oct 2020

Exploring Sensitivity of ICF Outputs to Design Parameters in Experiments Using Machine Learning · IEEE Transactions on Plasma Science (IEEE Trans. Plasma Sci.), 2020
Julia B. Nakhleh, M. G. Fernández-Godino, M. Grosskopf, B. Wilson, J. Kline, G. Srinivasan
08 Oct 2020

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera
FaML · 08 Oct 2020

Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification
Florentin Flambeau Jiechieu Kameni, Norbert Tsopzé
XAI, FAtt, MedIm · 08 Oct 2020

Interpretable Sequence Classification via Discrete Optimization
Maayan Shvo, Andrew C. Li, Rodrigo Toro Icarte, Sheila A. McIlraith
06 Oct 2020

Efficient computation of contrastive explanations · IEEE International Joint Conference on Neural Networks (IJCNN), 2020
André Artelt, Barbara Hammer
06 Oct 2020

Explanation Ontology: A Model of Explanations for User-Centered AI · International Workshop on the Semantic Web (SW), 2020
Shruthi Chari, Oshani Seneviratne, Daniel Gruen, Morgan Foreman, Amar K. Das, D. McGuinness
XAI · 04 Oct 2020

A Survey on Explainability in Machine Reading Comprehension
Mokanarangan Thayaparan, Marco Valentino, André Freitas
FaML · 01 Oct 2020

Explainable AI without Interpretable Model
Kary Framling
ELM · 29 Sep 2020

Instance-based Counterfactual Explanations for Time Series Classification
Eoin Delaney, Derek Greene, Mark T. Keane
CML, AI4TS · 28 Sep 2020

Disentangled Neural Architecture Search · IEEE International Joint Conference on Neural Networks (IJCNN), 2020
Xinyue Zheng, Xuanjing Huang, Qigang Wang, Peng Wang
AI4CE · 24 Sep 2020

Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing · European Conference on Information Systems (ECIS), 2020
Nijat Mehdiyev, Peter Fettke
22 Sep 2020

ALICE: Active Learning with Contrastive Natural Language Explanations · Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Weixin Liang, James Zou, Zhou Yu
VLM · 22 Sep 2020

Survey of explainable machine learning with visual and granular methods beyond quasi-explanations · Studies in Computational Intelligence (SCI), 2020
Boris Kovalerchuk, M. Ahmad (University of Washington Tacoma)
21 Sep 2020

Causal Rule Ensemble: Interpretable Discovery and Inference of Heterogeneous Treatment Effects
Falco J. Bargagli-Stoffi, Riccardo Cadei, Kwonsang Lee, Francesca Dominici
CML · 18 Sep 2020

Principles and Practice of Explainable Machine Learning · Frontiers in Big Data (Front. Big Data), 2020
Vaishak Belle, I. Papantonis
FaML · 18 Sep 2020

Addressing Cognitive Biases in Augmented Business Decision Systems
Thomas Baudel, Manon Verbockhaven, Guillaume Roy, Victoire Cousergue, Rida Laarach
17 Sep 2020

GLUCOSE: GeneraLized and COntextualized Story Explanations · Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
N. Mostafazadeh, Aditya Kalyanpur, Lori Moon, David W. Buchanan, Lauren Berkowitz, Or Biran, Jennifer Chu-Carroll
16 Sep 2020

Should We Trust (X)AI? Design Dimensions for Structured Experimental Evaluations
F. Sperrle, Mennatallah El-Assady, G. Guo, Duen Horng Chau, Alex Endert, Daniel A. Keim
14 Sep 2020

Is there a role for statistics in artificial intelligence? · Advances in Data Analysis and Classification (ADAC), 2020
Sarah Friedrich, G. Antes, S. Behr, Harald Binder, W. Brannath, ..., H. Leitgöb, Markus Pauly, A. Steland, A. Wilhelm, T. Friede
13 Sep 2020

TREX: Tree-Ensemble Representer-Point Explanations
Jonathan Brophy, Daniel Lowd
TDI · 11 Sep 2020

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples · Minds and Machines (MM), 2020
Timo Freiesleben
GAN · 11 Sep 2020

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning · AAAI Conference on Artificial Intelligence (AAAI), 2020
Eoin M. Kenny, Mark T. Keane
10 Sep 2020

Beneficial and Harmful Explanatory Machine Learning · Machine-mediated learning (ML), 2020
L. Ai, Stephen Muggleton, Céline Hocquette, Mark Gromowski, Ute Schmid
09 Sep 2020