ResearchTrend.AI


Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,336 papers shown
Red Teaming Deep Neural Networks with Feature Synthesis Tools
Neural Information Processing Systems (NeurIPS), 2023
Stephen Casper
Yuxiao Li
Jiawei Li
Tong Bu
Ke Zhang
K. Hariharan
Dylan Hadfield-Menell
AAML
355
21
0
08 Feb 2023
Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan
Kacper Sokol
305
9
0
07 Feb 2023
Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2023
M. Hashemi
Ali Darejeh
Francisco Cruz
333
4
0
07 Feb 2023
Five policy uses of algorithmic transparency and explainability
Matthew R. O’Shaughnessy
315
1
0
06 Feb 2023
Hypothesis Testing and Machine Learning: Interpreting Variable Effects in Deep Artificial Neural Networks using Cohen's f²
Applied Soft Computing (Appl. Soft Comput.), 2023
Wolfgang Messner
CML
183
16
0
02 Feb 2023
Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan
Koustuv Saha
M. D. Choudhury
Mark O. Riedl
248
72
0
01 Feb 2023
On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits
International Symposium on Artificial Intelligence and Mathematics (ISAIM), 2022
Alexis de Colnet
Pierre Marquis
128
9
0
30 Jan 2023
Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
International Joint Conference on Artificial Intelligence (IJCAI), 2023
Saugat Aryal
Mark T. Keane
200
27
0
27 Jan 2023
Reflective Artificial Intelligence
P. R. Lewis
Stefan Sarkadi
136
29
0
25 Jan 2023
Explainable AI does not provide the explanations end-users are asking for
Savio Rozario
G. Cevora
XAI
166
2
0
25 Jan 2023
ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
Yotam Amitai
Guy Avni
Ofra Amir
268
3
0
24 Jan 2023
Explainable Deep Reinforcement Learning: State of the Art and Challenges
ACM Computing Surveys (ACM CSUR), 2022
G. Vouros
XAI
381
122
0
24 Jan 2023
Selective Explanations: Leveraging Human Input to Align Explainable AI
Vivian Lai
Yiming Zhang
Chacha Chen
Q. V. Liao
Chenhao Tan
331
58
0
23 Jan 2023
Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar
AI4CE
146
4
0
22 Jan 2023
Rationalization for Explainable NLP: A Survey
Sai Gurrapu
Ajay Kulkarni
Lifu Huang
Ismini Lourentzou
Laura J. Freeman
Feras A. Batarseh
295
50
0
21 Jan 2023
Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
International Symposium on Computers and Communications (ISCC), 2021
C. Metta
Riccardo Guidotti
Yuan Yin
Patrick Gallinari
S. Rinzivillo
MedIm
139
13
0
18 Jan 2023
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen
Q. V. Liao
Jennifer Wortman Vaughan
Gagan Bansal
546
149
0
18 Jan 2023
Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle
Ryosuke Hyodo
Susumu Saito
Teppei Nakano
Makoto Akabane
Ryoichi Kasuga
Tetsuji Ogawa
67
1
0
10 Jan 2023
Language as a Latent Sequence: deep latent variable models for semi-supervised paraphrase generation
AI Open (AO), 2023
Jialin Yu
Alexandra I. Cristea
Anoushka Harit
Zhongtian Sun
O. Aduragba
Lei Shi
Noura Al Moubayed
VLM, BDL, DRL
265
3
0
05 Jan 2023
PEAK: Explainable Privacy Assistant through Automated Knowledge Extraction
Gonul Ayci
Arzucan Özgür
Murat Şensoy
P. Yolum
240
4
0
05 Jan 2023
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
Lars Holmberg
P. Davidsson
Per Linde
197
2
0
31 Dec 2022
A Theoretical Framework for AI Models Explainability with Application in Biomedicine
IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2022
Matteo Rizzo
Alberto Veneri
A. Albarelli
Claudio Lucchese
Marco Nobile
Cristina Conati
XAI
250
12
0
29 Dec 2022
Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim
Tanhim Islam
Oya Beyan
Christoph Lange
Michael Cochez
Dietrich Rebholz-Schuhmann
Stefan Decker
315
109
0
25 Dec 2022
Explanation Regeneration via Information Bottleneck
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Qintong Li
Zhiyong Wu
Lingpeng Kong
Wei Bi
240
4
0
19 Dec 2022
Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Artificial Intelligence (AI), 2022
Eoin Delaney
A. Pakrashi
Derek Greene
Mark T. Keane
242
21
0
16 Dec 2022
Interpretable ML for Imbalanced Data
Damien Dablain
C. Bellinger
Bartosz Krawczyk
D. Aha
Nitesh Chawla
190
2
0
15 Dec 2022
Explanations Can Reduce Overreliance on AI Systems During Decision-Making
Helena Vasconcelos
Matthew Jörke
Madeleine Grunde-McLaughlin
Tobias Gerstenberg
Michael S. Bernstein
Ranjay Krishna
265
243
0
13 Dec 2022
Improving Accuracy Without Losing Interpretability: A ML Approach for Time Series Forecasting
Yiqi Sun
Zheng Shi
Jianshen Zhang
Yongzhi Qi
Hao Hu
Zuo-jun Shen
AI4TS
156
1
0
13 Dec 2022
On Computing Probabilistic Abductive Explanations
International Journal of Approximate Reasoning (IJAR), 2022
Yacine Izza
Xuanxiang Huang
Alexey Ignatiev
Nina Narodytska
Martin C. Cooper
Sasha Rubin
FAtt, XAI
227
26
0
12 Dec 2022
Towards a Learner-Centered Explainable AI: Lessons from the learning sciences
Anna Kawakami
Luke M. Guerdan
Yang Cheng
Anita Sun
Alison Hu
...
Nikos Arechiga
Matthew H. Lee
Scott A. Carter
Haiyi Zhu
Kenneth Holstein
180
10
0
11 Dec 2022
FAIR AI Models in High Energy Physics
Javier Mauricio Duarte
Haoyang Li
Avik Roy
Ruike Zhu
Eliu A. Huerta
...
Mark S. Neubauer
Sang Eon Park
M. Quinnan
R. Rusack
Zhizhen Zhao
311
13
0
09 Dec 2022
Criteria for Classifying Forecasting Methods
International Journal of Forecasting (IJF), 2020
Tim Januschowski
Jan Gasthaus
Bernie Wang
David Salinas
Valentin Flunkert
Michael Bohlke-Schneider
Laurent Callot
AI4TS
237
201
0
07 Dec 2022
Towards Better User Requirements: How to Involve Human Participants in XAI Research
Thu Nguyen
Jichen Zhu
98
4
0
06 Dec 2022
Relative Sparsity for Medical Decision Problems
Statistics in Medicine (Stat Med), 2022
Samuel J. Weisenthal
Sally W. Thurston
Ashkan Ertefaie
200
4
0
29 Nov 2022
Mixture of Decision Trees for Interpretable Machine Learning
International Conference on Machine Learning and Applications (ICMLA), 2022
Simeon Brüggenjürgen
Nina Schaaf
P. Kerschke
Marco F. Huber
MoE
86
1
0
26 Nov 2022
Interpretability of an Interaction Network for identifying $H \rightarrow b\bar{b}$ jets
Avik Roy
Mark S. Neubauer
117
3
0
23 Nov 2022
Algorithmic Decision-Making Safeguarded by Human Knowledge
Social Science Research Network (SSRN), 2022
Yi Xiong
Mingya Hu
Siyuan Li
116
5
0
20 Nov 2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar
Tim Miller
FAtt
141
8
0
19 Nov 2022
Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper
K. Hariharan
Dylan Hadfield-Menell
AAML
386
11
0
18 Nov 2022
Towards Explaining Subjective Ground of Individuals on Social Media
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Younghun Lee
Dan Goldwasser
221
2
0
18 Nov 2022
Explainability Via Causal Self-Talk
Neural Information Processing Systems (NeurIPS), 2022
Nicholas A. Roy
Junkyung Kim
Neil C. Rabinowitz
CML
192
8
0
17 Nov 2022
Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement
Montgomery Flora
Corey K. Potvin
A. McGovern
Shawn Handler
FAtt
273
25
0
16 Nov 2022
(When) Are Contrastive Explanations of Reinforcement Learning Helpful?
Sanjana Narayanan
Isaac Lage
Finale Doshi-Velez
OffRL
68
1
0
14 Nov 2022
Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Swarnadeep Saha
Peter Hase
Nazneen Rajani
Joey Tianyi Zhou
LRM
159
16
0
14 Nov 2022
Seamful XAI: Operationalizing Seamful Design in Explainable AI
Upol Ehsan
Q. V. Liao
Samir Passi
Mark O. Riedl
Hal Daumé
215
34
0
12 Nov 2022
A Survey on Explainable Reinforcement Learning: Concepts, Algorithms, Challenges
Yunpeng Qing
Shunyu Liu
Mingli Song
Huiqiong Wang
Weilong Dai
XAI
472
2
0
12 Nov 2022
Social Construction of XAI: Do We Need One Definition to Rule Them All?
Upol Ehsan
Mark O. Riedl
126
10
0
11 Nov 2022
Behaviour Trees for Creating Conversational Explanation Experiences
A. Wijekoon
D. Corsar
Nirmalie Wiratunga
120
3
0
11 Nov 2022
Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning
Danial Dervovic
Nicolas Marchesotti
Freddy Lecue
Daniele Magazzeni
140
0
0
11 Nov 2022
CCPrefix: Counterfactual Contrastive Prefix-Tuning for Many-Class Classification
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Yongbin Li
Canran Xu
Guodong Long
Tao Shen
Chongyang Tao
Jing Jiang
248
2
0
11 Nov 2022