
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · 22 June 2017 · arXiv:1706.07269 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,242 papers shown
Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review
T. A. Bach, Jenny K. Kristiansen, Aleksandar Babic, Alon Jacovi
05 Oct 2023

Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing
R. Borsoi, Deniz Erdoğmuş, Tales Imbiriba
03 Oct 2023

Designing User-Centric Behavioral Interventions to Prevent Dysglycemia with Novel Counterfactual Explanations
Asiful Arefeen, Hassan Ghasemzadeh
02 Oct 2023

Refutation of Shapley Values for XAI -- Additional Evidence
Xuanxiang Huang, Sasha Rubin
30 Sep 2023 · AAML

Dynamic Interpretability for Model Comparison via Decision Rules
Adam Rida, Marie-Jeanne Lesot, Junsheng Wang, Liyan Zhang
29 Sep 2023

Tell Me a Story! Narrative-Driven XAI with Large Language Models
David Martens, James Hinns, Camille Dams, Mark Vergouwen, Theodoros Evgeniou
29 Sep 2023

Multiple Different Black Box Explanations for Image Classifiers
Hana Chockler, D. A. Kelly, Daniel Kroening
25 Sep 2023 · FAtt

An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems
Andreas Metzger, Jon Bartel, Jan Laufer
25 Sep 2023

May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
Tong Zhang, Xiaoyu Yang, Boyang Albert Li
25 Sep 2023

A Comprehensive Review on Financial Explainable AI
Wei Jie Yeo, Wihan van der Heever, Rui Mao, Min Zhang, Ranjan Satapathy, G. Mengaldo
21 Sep 2023 · XAI, AI4TS

ProtoExplorer: Interpretable Forensic Analysis of Deepfake Videos using Prototype Exploration and Refinement
M. D. L. D. Bouter, J. Pardo, Z. Geradts, M. Worring
20 Sep 2023

Explaining Agent Behavior with Large Language Models
Xijia Zhang, Yue (Sophie) Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
19 Sep 2023 · LM&Ro, LLMAG

Evaluation of Human-Understandability of Global Model Explanations using Decision Tree
Adarsa Sivaprasad, Ehud Reiter, N. Tintarev, Nir Oren
18 Sep 2023 · FAtt

Interpretability is not Explainability: New Quantitative XAI Approach with a focus on Recommender Systems in Education
Riccardo Porcedda
18 Sep 2023 · XAI

Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks
S. Caprioli, Emanuele Cagliero, Riccardo Crupi
15 Sep 2023 · GAN

Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning
Emanuele Marconato, Andrea Passerini, Stefano Teso
14 Sep 2023

Causal Entropy and Information Gain for Measuring Causal Control
F. N. F. Q. Simoes, Mehdi Dastani, T. V. Ommen
14 Sep 2023 · CML

On the Injunction of XAIxArt
C. Arora, Debarun Sarkar
12 Sep 2023

Viewing the process of generating counterfactuals as a source of knowledge: a new approach for explaining classifiers
Vincent Lemaire, Nathan Le Boudec, Victor Guyomard, Françoise Fessant
08 Sep 2023 · CML

Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse
Edward A. Small, Jeffrey N Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raúl Santos-Rodríguez
08 Sep 2023

FIND: A Function Description Benchmark for Evaluating Interpretability Methods
Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzyńska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba
07 Sep 2023

A Refutation of Shapley Values for Explainability
Xuanxiang Huang, Sasha Rubin
06 Sep 2023 · FAtt

Natural Example-Based Explainability: a Survey
Antonin Poché, Lucas Hervier, M. Bakkay
05 Sep 2023 · XAI

Outlining the design space of eXplainable swarm (xSwarm): experts perspective
Mohammad Naiseh, Mohammad D. Soorati, Sarvapali Ramchurn
03 Sep 2023

Declarative Reasoning on Explanations Using Constraint Logic Programming
Laura State, Salvatore Ruggieri, Franco Turini
01 Sep 2023 · LRM

Interpretable Medical Imagery Diagnosis with Self-Attentive Transformers: A Review of Explainable AI for Health Care
Tin Lai
01 Sep 2023 · MedIm

Learning to Taste: A Multimodal Wine Dataset
Thoranna Bender, Simon Moe Sorensen, A. Kashani, K. E. Hjorleifsson, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg
31 Aug 2023 · CoGe

Concentrating on the Impact: Consequence-based Explanations in Recommender Systems
Sebastian Lubos, Thi Ngoc Trang Tran, Seda Polat-Erdeniz, Merfat El Mansi, Alexander Felfernig, Manfred Wundara, G. Leitner
31 Aug 2023 · HAI

Explainable Answer-set Programming
Tobias Geibinger
30 Aug 2023 · LRM

RecRec: Algorithmic Recourse for Recommender Systems
Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P. Dickerson, Chirag Shah
28 Aug 2023

Explaining with Attribute-based and Relational Near Misses: An Interpretable Approach to Distinguishing Facial Expressions of Pain and Disgust
Bettina Finzel, Simon Kuhn, David E. Tafler, Ute Schmid
27 Aug 2023 · FAtt

Situated Natural Language Explanations
Zining Zhu, Hao Jiang, Jingfeng Yang, Sreyashi Nag, Chao Zhang, Jie Huang, Yifan Gao, Frank Rudzicz, Bing Yin
27 Aug 2023 · LRM

Learning to Intervene on Concept Bottlenecks
David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting
25 Aug 2023

Reframing the Brain Age Prediction Problem to a More Interpretable and Quantitative Approach
Neha Gianchandani, Mahsa Dibaji, M. Bento, Ethan MacDonald, Roberto Souza
23 Aug 2023 · FAtt, MedIm

User-centric AIGC products: Explainable Artificial Intelligence and AIGC products
Hanjie Yu, Yan Dong, Qiong Wu
19 Aug 2023

Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities
Munib Mesinovic, Peter Watkinson, Ting Zhu
16 Aug 2023 · FaML

Explaining Black-Box Models through Counterfactuals
Patrick Altmeyer, A. V. Deursen, Cynthia C. S. Liem
14 Aug 2023 · CML, LRM

DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System
Mojtaba Yeganejou, Kimia Honari, Ryan Kluzinski, S. Dick, M. Lipsett, James Miller
11 Aug 2023 · FedML, AI4CE

Contrastive Explanations of Centralized Multi-agent Optimization Solutions
Parisa Zehtabi, Alberto Pozanco, Ayala Bloch, Daniel Borrajo, Sarit Kraus
11 Aug 2023

Explainable AI applications in the Medical Domain: a systematic review
Nicoletta Prentzas, A. Kakas, Constantinos S. Pattichis
10 Aug 2023

Adding Why to What? Analyses of an Everyday Explanation
Lutz Terfloth, M. Schaffer, H. M. Buhl, Carsten Schulte
08 Aug 2023

Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, K. Verbert
31 Jul 2023

Comprehensive Algorithm Portfolio Evaluation using Item Response Theory
Sevvandi Kandanaarachchi, K. Smith-Miles
29 Jul 2023

Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)
Barnaby Crook, Maximilian Schluter, Timo Speith
26 Jul 2023

EnTri: Ensemble Learning with Tri-level Representations for Explainable Scene Recognition
Amirhossein Aminimehr, Amir Molaei, Min Zhang
23 Jul 2023

Providing personalized Explanations: a Conversational Approach
Jieting Luo, T. Studer, Mehdi Dastani
21 Jul 2023

Modifications of the Miller definition of contrastive (counterfactual) explanations
Kevin McAreavey, Weiru Liu
20 Jul 2023

Exploring Perspectives on the Impact of Artificial Intelligence on the Creativity of Knowledge Work: Beyond Mechanised Plagiarism and Stochastic Parrots
Advait Sarkar
20 Jul 2023

NaMemo2: Facilitating Teacher-Student Interaction with Theory-Based Design and Student Autonomy Consideration
Guang-Xiu Jiang, Jiahui Zhu, Yun-fu Li, Pengcheng An, Yunlong Wang
17 Jul 2023

SHAMSUL: Systematic Holistic Analysis to investigate Medical Significance Utilizing Local interpretability methods in deep learning for chest radiography pathology prediction
Mahbub Ul Alam, Jaakko Hollmén, Jón R. Baldvinsson, R. Rahmani
16 Jul 2023 · FAtt