Explanation in Artificial Intelligence: Insights from the Social Sciences
22 June 2017
Tim Miller
Topics: XAI
Versions: v1, v2, v3 (latest)
Links: ArXiv (abs), PDF, HTML
Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

Showing 50 of 1,336 citing papers
People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
International Conference on Human Factors in Computing Systems (CHI), 2024
Bálint Gyevnár, Stephanie Droop, Tadeg Quillien, Shay B. Cohen, Neil R. Bramley, Christopher G. Lucas, Stefano V. Albrecht
11 Mar 2024

WatChat: Explaining perplexing programs by debugging mental models
Kartik Chandra, Tzu-Mao Li, Rachit Nigam, Joshua Tenenbaum, Jonathan Ragan-Kelley
Topics: LRM
08 Mar 2024

T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers
Mariano V. Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris
Topics: ViT, FAtt
07 Mar 2024

Explaining Genetic Programming Trees using Large Language Models
Paula Maddigan, Andrew Lensen, Bing Xue
Topics: AI4CE
06 Mar 2024

Even-Ifs From If-Onlys: Are the Best Semi-Factual Explanations Found Using Counterfactuals As Guides?
Saugat Aryal, Mark T. Keane
01 Mar 2024

Modeling the Quality of Dialogical Explanations
Milad Alshomary, Felix Lange, Meisam Booshehri, Meghdut Sengupta, Philipp Cimiano, Henning Wachsmuth
01 Mar 2024

Axe the X in XAI: A Plea for Understandable AI
Andrés Páez
01 Mar 2024

User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
Robert Nimmo, Marios Constantinides, Ke Zhou, Daniele Quercia, Simone Stumpf
29 Feb 2024

Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Stephanie Brandl, Oliver Eberle, Tiago F. R. Ribeiro, Anders Søgaard, Nora Hollenstein
29 Feb 2024

Cultural Bias in Explainable AI Research: A Systematic Analysis
Uwe Peters, Mary Carman
28 Feb 2024

User Decision Guidance with Selective Explanation Presentation from Explainable-AI
Yosuke Fukuchi, Seiji Yamada
28 Feb 2024

Understanding the Dataset Practitioners Behind Large Language Model Development
Crystal Qian, Emily Reif, Minsuk Kahng
21 Feb 2024

What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience
Dian Lei, Yao He, Jianyou Zeng
21 Feb 2024

SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments
Mersedeh Sadeghi, Lars Herbold, Max Unterbusch, Andreas Vogelsang
20 Feb 2024

Right on Time: Revising Time Series Models by Constraining their Explanations
Maurice Kraus, David Steinmann, Antonia Wüst, Andre Kokozinski, Kristian Kersting
Topics: AI4TS
20 Feb 2024

Properties and Challenges of LLM-Generated Explanations
Jenny Kunz, Marco Kuhlmann
16 Feb 2024

Current and future roles of artificial intelligence in retinopathy of prematurity
Ali Jafarizadeh, Shadi Farabi Maleki, Parnia Pouya, Navid Sobhi, M. Abdollahi, ..., Houshyar Asadi, R. Alizadehsani, Ruyan Tan, Sheikh Mohammad Shariful Islam, U. R. Acharya
Topics: AI4CE
15 Feb 2024

Explaining Probabilistic Models with Distributional Values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
Topics: FAtt
15 Feb 2024

Connecting Algorithmic Fairness to Quality Dimensions in Machine Learning in Official Statistics and Survey Production
Patrick Oliver Schenk, Christoph Kern
Topics: FaML
14 Feb 2024

TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
Hui Liu, Wenya Wang, Haoru Li, Haoliang Li
12 Feb 2024

One-for-many Counterfactual Explanations by Column Generation
Andrea Lodi, Jasone Ramírez-Ayerbe
Topics: LRM
12 Feb 2024

ACTER: Diverse and Actionable Counterfactual Sequences for Explaining and Diagnosing RL Policies
Jasmina Gajcin, Ivana Dusparic
Topics: CML, OffRL
09 Feb 2024

Scalable Interactive Machine Learning for Future Command and Control
Anna Madison, Ellen R. Novoseller, Vinicius G. Goecks, Benjamin T. Files, Nicholas R. Waytowich, Alfred Yu, Vernon J. Lawhern, Steven Thurman, Christopher Kelshaw, Kaleb McDowell
09 Feb 2024

Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Anton Kuznietsov, Bálint Gyevnár, Cheng Wang, Steven Peters, Stefano V. Albrecht
Topics: XAI
08 Feb 2024

Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
Yongchen Zhou, Richard Jiang
07 Feb 2024

Explaining Learned Reward Functions with Counterfactual Trajectories
Jan Wehner, Frans Oliehoek, Luciano Cavalcante Siebert
07 Feb 2024

Collective Counterfactual Explanations: Balancing Individual Goals and Collective Dynamics
A. Ehyaei, Ali Shirali, Samira Samadi
Topics: OffRL, OT
07 Feb 2024

Leveraging Large Language Models for Hybrid Workplace Decision Support
Yujin Kim, Chin-Chia Hsu
06 Feb 2024

SIDU-TXT: An XAI Algorithm for NLP with a Holistic Assessment Approach
Natural Language Processing Journal (JNLP), 2024
M. N. Jahromi, Satya M. Muddamsetty, Asta Sofie Stage Jarlner, Anna Murphy Hogenhaug, Thomas Gammeltoft-Hansen, T. Moeslund
05 Feb 2024

XAI-CF -- Examining the Role of Explainable Artificial Intelligence in Cyber Forensics
Shahid Alam, Zeynep Altıparmak
04 Feb 2024

What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
Xisen Jin, Xiang Ren
Topics: KELM, CLL
02 Feb 2024

EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations
Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, K. Verbert
01 Feb 2024

Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?
Jack Furby, Daniel Cunnington, Dave Braines, Alun D. Preece
01 Feb 2024

Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models
Adarsa Sivaprasad, Ehud Reiter
31 Jan 2024

A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, ..., Jiale Zhang, Bin Li, Wei Liu, Di Wu, Yixin Chen
26 Jan 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
Topics: AAML
25 Jan 2024

Design, Development, and Deployment of Context-Adaptive AI Systems for Enhanced End-User Adoption
Christine P. Lee
24 Jan 2024

Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Timothée Schmude, Laura M. Koesten, Torsten Moller, Sebastian Tschiatschek
24 Jan 2024

Visibility into AI Agents
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Alan Chan, Carson Ezell, Max Kaufmann, K. Wei, Lewis Hammond, ..., Nitarshan Rajkumar, David M. Krueger, Noam Kolt, Lennart Heim, Markus Anderljung
23 Jan 2024

Graph Edits for Counterfactual Explanations: A comparative study
Angeliki Dimitriou, Nikolaos Chaidos, Maria Lymperaiou, Giorgos Stamou
Topics: BDL
21 Jan 2024

A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
19 Jan 2024

Are self-explanations from Large Language Models faithful?
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Andreas Madsen, Sarath Chandar, Siva Reddy
Topics: LRM
15 Jan 2024

Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities
IEEE Access, 2024
Logan Cummins, Alexander Sommers, Somayeh Bakhtiari Ramezani, Sudip Mittal, Joseph E. Jabour, Maria Seale, Shahram Rahimi
15 Jan 2024

Reliability and Interpretability in Science and Deep Learning
Luigi Scorzato
14 Jan 2024

Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Maarten Sap
12 Jan 2024

What should I say? -- Interacting with AI and Natural Language Interfaces
Mark Adkins
12 Jan 2024

Effects of Multimodal Explanations for Autonomous Driving on Driving Performance, Cognitive Load, Expertise, Confidence, and Trust
Scientific Reports (Sci Rep), 2024
Robert Kaufman, Jean Costa, Everlyne Kimani
08 Jan 2024

Verifying Relational Explanations: A Probabilistic Approach
Abisha Thapa Magar, Anup Shakya, Somdeb Sarkhel, Deepak Venugopal
05 Jan 2024

Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
Aditya Bhattacharya
29 Dec 2023

Q-SENN: Quantized Self-Explaining Neural Networks
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
Topics: FAtt, AAML, MILM
21 Dec 2023
Page 7 of 27