
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller. 22 June 2017. arXiv:1706.07269.

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

Showing 50 of 1,336 citing papers.
Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations.
Benjamin Krarup, F. Lindner, Senka Krivic, D. Long. 20 Jun 2022.

Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability. International Journal of Information Management (IJIM), 2022.
L. Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch. 20 Jun 2022.
Eliminating The Impossible, Whatever Remains Must Be True. AAAI Conference on Artificial Intelligence (AAAI), 2022.
Jinqiang Yu, Alexey Ignatiev, Peter Stuckey, Nina Narodytska, Sasha Rubin. 20 Jun 2022.

A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning.
Bálint Gyevnár, Massimiliano Tamborski, Cheng-Hsien Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht. 17 Jun 2022.
Rectifying Mono-Label Boolean Classifiers.
S. Coste-Marquis, Pierre Marquis. 17 Jun 2022.

The Manifold Hypothesis for Gradient-Based Explanations.
Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, U. V. Luxburg. 15 Jun 2022.
Combining Counterfactuals With Shapley Values To Explain Image Models.
Aditya Lahiri, Kamran Alipour, Ehsan Adeli, Babak Salimi. 14 Jun 2022.

Explainable AI for High Energy Physics.
Mark S. Neubauer, Avik Roy. 14 Jun 2022.
A Methodology and Software Architecture to Support Explainability-by-Design.
T. D. Huynh, Niko Tsakalakis, Ayah Helal, Sophie Stalla-Bourdillon, Luc Moreau. 13 Jun 2022.

Mediators: Conversational Agents Explaining NLP Model Behavior.
Nils Feldhus, A. Ravichandran, Sebastian Möller. 13 Jun 2022.
Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces.
Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, M. Pazzani. 10 Jun 2022.

Diffeomorphic Counterfactuals with Generative Models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.
Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Müller, Pan Kessel. 10 Jun 2022.
Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance. Data & Policy (DP), 2022.
Andrew Bell, O. Nov, Julia Stoyanovich. 10 Jun 2022.

Ask to Know More: Generating Counterfactual Explanations for Fake Claims. Knowledge Discovery and Data Mining (KDD), 2022.
Shih-Chieh Dai, Yi-Li Hsu, Aiping Xiong, Lun-Wei Ku. 10 Jun 2022.
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark.
Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel. 08 Jun 2022.

EiX-GNN: Concept-level eigencentrality explainer for graph neural networks.
Adrien Raison, Pascal Bourdon, David Helbert. 07 Jun 2022.
Explainability in Mechanism Design: Recent Advances and the Road Ahead. European Workshop on Multi-Agent Systems (EUMAS), 2022.
Sharadhi Alape Suryanarayana, David Sarne, Sarit Kraus. 07 Jun 2022.

Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence.
Thao Le, Tim Miller, Ronal Singh, L. Sonenberg. 06 Jun 2022.
Can Requirements Engineering Support Explainable Artificial Intelligence? Towards a User-Centric Approach for Explainability Requirements.
Umm-e-Habiba, Justus Bogner, Stefan Wagner. 03 Jun 2022.

Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts.
Chirag Raman, Hayley Hung, Marco Loog. 01 Jun 2022.
Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for Importance of Variables.
Manjinder Singh, Alexander E. I. Brownlee, David Cairns. 31 May 2022.

GlanceNets: Interpretabile, Leak-proof Concept-based Models. Neural Information Processing Systems (NeurIPS), 2022.
Emanuele Marconato, Baptiste Caramiaux, Stefano Teso. 31 May 2022.
Causal Explanations for Sequential Decision Making Under Uncertainty. Adaptive Agents and Multi-Agent Systems (AAMAS), 2022.
Samer B. Nashed, Saaduddin Mahmud, C. V. Goldman, S. Zilberstein. 30 May 2022.

Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction. Adaptive Agents and Multi-Agent Systems (AAMAS), 2022.
Sharadhi Alape Suryanarayana, David Sarne, Bar-Ilan. 24 May 2022.
Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement. International Conference on Principles of Knowledge Representation and Reasoning (KR), 2022.
Antonio Rago, P. Baroni, Francesca Toni. 23 May 2022.

Fairness in Recommender Systems: Research Landscape and Future Directions.
Yashar Deldjoo, Dietmar Jannach, Alejandro Bellogín, Alessandro Difonzo, Dario Zanzonelli. 23 May 2022.
Human and technological infrastructures of fact-checking.
Prerna Juneja, Tanushree Mitra. 22 May 2022.

Explanatory machine learning for sequential human teaching. Machine-mediated learning (ML), 2022.
L. Ai, Johannes Langer, Stephen Muggleton, Ute Schmid. 20 May 2022.
The Fairness of Credit Scoring Models. Social Science Research Network (SSRN), 2021.
Christophe Hurlin, C. Pérignon, Sébastien Saurin. 20 May 2022.

Survey on Fair Reinforcement Learning: Theory and Practice.
Pratik Gajane, A. Saxena, M. Tavakol, George Fletcher, Mykola Pechenizkiy. 20 May 2022.
On Tackling Explanation Redundancy in Decision Trees. Journal of Artificial Intelligence Research (JAIR), 2022.
Yacine Izza, Alexey Ignatiev, Sasha Rubin. 20 May 2022.

Provably Precise, Succinct and Efficient Explanations for Decision Trees.
Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin. 19 May 2022.
Generating Explanations from Deep Reinforcement Learning Using Episodic Memory.
Sam Blakeman, D. Mareschal. 18 May 2022.

A Psychological Theory of Explainability. International Conference on Machine Learning (ICML), 2022.
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto. 17 May 2022.

Is explainable AI a race against model complexity?
Advait Sarkar. 17 May 2022.
Sparse Visual Counterfactual Explanations in Image Space. German Conference on Pattern Recognition (GCPR), 2022.
Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein. 16 May 2022.

Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that? Patterns, 2022.
Marko Tešić, U. Hahn. 12 May 2022.
"There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making. Conference on Fairness, Accountability and Transparency (FAccT), 2022.
Jakob Schoeffer, Niklas Kuehl, Yvette Machowski. 11 May 2022.

Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting. Conference on Fairness, Accountability and Transparency (FAccT), 2022.
Ulrike Kuhl, André Artelt, Barbara Hammer. 11 May 2022.
The Conflict Between Explainable and Accountable Decision-Making Algorithms. Conference on Fairness, Accountability and Transparency (FAccT), 2022.
Gabriel Lima, Nina Grgić-Hlavca, Jin Keun Jeong, M. Cha. 11 May 2022.

Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory. Conference on Fairness, Accountability and Transparency (FAccT), 2022.
Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe. 10 May 2022.
Lifelong Personal Context Recognition.
A. Bontempelli, Marcelo D. Rodas-Brítez, Xiaoyue Li, Haonan Zhao, L. Erculiani, Stefano Teso, Baptiste Caramiaux, Fausto Giunchiglia. 10 May 2022.

Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning.
Ulrike Kuhl, André Artelt, Barbara Hammer. 06 May 2022.
Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko. 06 May 2022.

Tell Me Something That Will Help Me Trust You: A Survey of Trust Calibration in Human-Agent Interaction.
G. Cancro, Shimei Pan, James R. Foulds. 06 May 2022.

One-way Explainability Isn't The Message.
Harshvardhan Mestha, Michael Bain, Enrico W. Coiera. 05 May 2022.
Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI.
Marco Valentino, André Freitas. 03 May 2022.

SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022.
Zijian Zhang, Vinay Setty, Avishek Anand. 03 May 2022.
Visual Knowledge Discovery with Artificial Intelligence: Challenges and Future Directions.
Boris Kovalerchuk, Răzvan Andonie, Nuno Datia, Kawa Nazemi, Ebad Banissi. 03 May 2022.

TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security. IEEE Internet of Things Journal (IEEE IoT J.), 2022.
Maede Zolanvari, Zebo Yang, K. Khan, Rajkumar Jain, N. Meskin. 02 May 2022.