ResearchTrend.AI

arXiv:1706.07269

Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,336 papers shown
Rational Shapley Values
Conference on Fairness, Accountability and Transparency (FAccT), 2021
David S. Watson
18 Jun 2021

It's FLAN time! Summing feature-wise latent representations for interpretability
An-phi Nguyen, María Rodríguez Martínez
FAtt
18 Jun 2021

Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction
C. Charlton, M. Poon, P. Brennan, Jacques D. Fleuriot
17 Jun 2021

Predictive Modeling of Hospital Readmission: Challenges and Solutions
Shuwen Wang, Xingquan Zhu
OOD
16 Jun 2021

Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach
Johannes Rabold, M. Siebers, Ute Schmid
15 Jun 2021

Counterfactual Explanations as Interventions in Latent Space
Data Mining and Knowledge Discovery (DMKD), 2021
Riccardo Crupi, Alessandro Castelnovo, D. Regoli, Beatriz San Miguel González
CML
14 Jun 2021

Prompting Contrastive Explanations for Commonsense Reasoning Tasks
Findings of the Association for Computational Linguistics (Findings), 2021
Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi
ReLM, LRM
12 Jun 2021

Synthesising Reinforcement Learning Policies through Set-Valued Inductive Rule Learning
International Workshop on Trustworthy AI - Integrating Learning, Optimization and Reasoning (TAILOR), 2021
Youri Coppens, Denis Steckelmacher, Catholijn M. Jonker, A. Nowé
10 Jun 2021

On the overlooked issue of defining explanation objectives for local-surrogate explainers
Rafael Poyiadzi, X. Renard, Thibault Laugel, Raúl Santos-Rodríguez, Marcin Detyniecki
10 Jun 2021

Explainable AI, but explainable to whom?
Julie Gerlings, Millie Søndergaard Jensen, Arisa Shollo
10 Jun 2021

Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems
Knowledge-Based Systems (KBS), 2021
Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete
09 Jun 2021

Amortized Generation of Sequential Algorithmic Recourses for Black-box Models
AAAI Conference on Artificial Intelligence (AAAI), 2021
Sahil Verma, Keegan E. Hines, John P. Dickerson
07 Jun 2021

Interactive Label Cleaning with Example-based Explanations
Neural Information Processing Systems (NeurIPS), 2021
Stefano Teso, A. Bontempelli, Fausto Giunchiglia, Baptiste Caramiaux
07 Jun 2021

Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Jiacheng Xu, Greg Durrett
03 Jun 2021

Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
G. Cabour, A. Morales, É. Ledoux, S. Bassetto
02 Jun 2021

On Efficiently Explaining Graph-Based Classifiers
International Conference on Principles of Knowledge Representation and Reasoning (KR), 2021
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt
02 Jun 2021

Is Sparse Attention more Interpretable?
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Clara Meister, Stefan Lazov, Isabelle Augenstein, Robert Bamler
MILM
02 Jun 2021

The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Neural Information Processing Systems (NeurIPS), 2021
Peter Hase, Harry Xie, Joey Tianyi Zhou
OOD, DLRM, FAtt
01 Jun 2021

Efficient Explanations With Relevant Sets
Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin
FAtt
01 Jun 2021

Explanations for Monotonic Classifiers
International Conference on Machine Learning (ICML), 2021
Sasha Rubin, Thomas Gerspacher, M. Cooper, Alexey Ignatiev, Nina Narodytska
FAtt
01 Jun 2021

A unified logical framework for explanations in classifier systems
Journal of Logic and Computation (J. Log. Comput.), 2021
Xinghan Liu, E. Lorini
30 May 2021

Do not explain without context: addressing the blind spot of model explanations
Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, P. Biecek
28 May 2021

Fooling Partial Dependence via Data Poisoning
Hubert Baniecki, Wojciech Kretowicz, P. Biecek
AAML
26 May 2021

Effects of interactivity and presentation on review-based explanations for recommendations
IFIP TC13 International Conference on Human-Computer Interaction (INTERACT), 2021
Diana C. Hernandez-Bocanegra, J. Ziegler
25 May 2021

Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Emilio Gamba, B. Bogaerts, Tias Guns
LRM
25 May 2021

Argumentative XAI: A Survey
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Kristijonas Čyras, Antonio Rago, Emanuele Albini, P. Baroni, Francesca Toni
24 May 2021

On Explaining Random Forests with SAT
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Yacine Izza, Sasha Rubin
FAtt
21 May 2021

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
XAI
21 May 2021

Data-driven discovery of interpretable causal relations for deep learning material laws with uncertainty propagation
Granular Matter (GM), 2021
Xiao Sun, B. Bahmani, Nikolaos N. Vlassis, WaiChing Sun, Yanxun Xu
CML, AI4CE
20 May 2021

Explainable Activity Recognition for Smart Home Systems
Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova
20 May 2021

AI and Ethics -- Operationalising Responsible AI
Liming Zhu, Xiwei Xu, Qinghua Lu, Guido Governatori, Jon Whittle
19 May 2021

A Review on Explainability in Multimodal Deep Neural Nets
IEEE Access, 2021
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021

Designer-User Communication for XAI: An epistemological approach to discuss XAI design
J. Ferreira, Mateus de Souza Monteiro
17 May 2021

Abstraction, Validation, and Generalization for Explainable Artificial Intelligence
Applied AI Letters (AA), 2021
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
16 May 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Data Mining and Knowledge Discovery (DMKD), 2021
Gesina Schwalbe, Bettina Finzel
XAI
15 May 2021

Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks
IEEE International Conference on Systems, Man and Cybernetics (SMC), 2021
Mohammad Nokhbeh Zaeem, Majid Komeili
CML
14 May 2021

Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
Conference on Computational Natural Language Learning (CoNLL), 2021
Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg
14 May 2021

SAT-Based Rigorous Explanations for Decision Lists
International Conference on Theory and Applications of Satisfiability Testing (SAT), 2021
Alexey Ignatiev, Sasha Rubin
XAI
14 May 2021

Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning
Cor Steging, S. Renooij, Bart Verheij
14 May 2021

XAI Handbook: Towards a Unified Framework for Explainable AI
Sebastián M. Palacio, Adriano Lucieri, Mohsin Munir, Jörn Hees, Sheraz Ahmed, Andreas Dengel
14 May 2021

Sufficient reasons for classifier decisions in the presence of constraints
Niku Gorji, S. Rubin
12 May 2021

Intelligent interactive technologies for mental health and well-being
Artificial Intelligence (AI), 2021
M. Jovanovic, Aleksandar Jevremovic, M. Pejović-Milovančević
11 May 2021

Explainable Autonomous Robots: A Survey and Perspective
Tatsuya Sakai, Takayuki Nagai
06 May 2021

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
G. Chrysostomou, Nikolaos Aletras
06 May 2021

Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
International Conference on Computational Semantics (IWCS), 2021
Marco Valentino, Ian Pratt-Hartmann, André Freitas
XAI, LRM
05 May 2021

Explanation-Based Human Debugging of NLP Models: A Survey
Transactions of the Association for Computational Linguistics (TACL), 2021
Piyawat Lertvittayakumjorn, Francesca Toni
LRM
30 Apr 2021

Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation
Mark T. Keane, Eoin M. Kenny, M. Temraz, Derek Greene, Barry Smyth
29 Apr 2021

From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2021
David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan
FAtt
27 Apr 2021

TrustyAI Explainability Toolkit
Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele Zonca
26 Apr 2021

Axes for Sociotechnical Inquiry in AI Research
IEEE Transactions on Technology and Society (IEEE TTS), 2021
Sarah Dean, T. Gilbert, Nathan Lambert, Tom Zick
26 Apr 2021