Explanation in Artificial Intelligence: Insights from the Social Sciences
22 June 2017
Tim Miller
XAI
arXiv:1706.07269 (v3, latest)

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

Showing 50 of 1,335 citing papers.
Predictable Artificial Intelligence
Lexin Zhou
Pablo Antonio Moreno Casares
Fernando Martínez-Plumed
John Burden
Ryan Burnell
...
Seán Ó hÉigeartaigh
Danaja Rutar
Wout Schellaert
Konstantinos Voudouris
José Hernández-Orallo
429
6
0
08 Jan 2025
Citations and Trust in LLM Generated Responses
Yifan Ding
Matthew Facciani
Amrit Poudel
Ellen Joyce
Salvador Aguiñaga
Balaji Veeramani
Sanmitra Bhattacharya
Tim Weninger
HILM
286
11
0
03 Jan 2025
FitCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Qianli Wang
Nils Feldhus
Simon Ostermann
Luis Felipe Villa-Arenas
Sebastian Möller
Vera Schmitt
AAML
458
2
0
01 Jan 2025
Designing Visual Explanations and Learner Controls to Engage Adolescents in AI-Supported Exercise Selection
International Conference on Learning Analytics and Knowledge (LAK), 2024
Jeroen Ooge
Arno Vanneste
Maxwell Szymanski
K. Verbert
192
1
0
20 Dec 2024
A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications
M. Islam
M. F. Mridha
Md Abrar Jahin
Nilanjan Dey
157
5
0
05 Dec 2024
Integrative CAM: Adaptive Layer Fusion for Comprehensive Interpretation of CNNs
Aniket K. Singh
Debasis Chaudhuri
Manish P. Singh
Samiran Chattopadhyay
227
4
0
02 Dec 2024
Interpreting Language Reward Models via Contrastive Explanations
International Conference on Learning Representations (ICLR), 2024
Junqi Jiang
Tom Bewley
Saumitra Mishra
Freddy Lecue
Manuela Veloso
445
5
0
25 Nov 2024
FG-CXR: A Radiologist-Aligned Gaze Dataset for Enhancing Interpretability in Chest X-Ray Report Generation
Asian Conference on Computer Vision (ACCV), 2024
Trong-Thang Pham
Ngoc-Vuong Ho
Nhat-Tan Bui
T. Phan
Patel Brijesh
...
Gianfranco Doretto
Anh Nguyen
Carol C. Wu
Hien Nguyen
Ngan Le
311
7
0
23 Nov 2024
Aligning Generalisation Between Humans and Machines
Filip Ilievski
Barbara Hammer
F. V. Harmelen
Benjamin Paassen
S. Saralajew
...
Vered Shwartz
Gabriella Skitalinskaya
Clemens Stachl
Gido M. van de Ven
T. Villmann
649
4
0
23 Nov 2024
GraphXAIN: Narratives to Explain Graph Neural Networks
Mateusz Cedro
David Martens
460
5
0
04 Nov 2024
EXAGREE: Mitigating Explanation Disagreement with Stakeholder-Aligned Models
Sichao Li
Tommy Liu
Quanling Deng
Amanda S. Barnard
204
1
0
04 Nov 2024
A Mechanistic Explanatory Strategy for XAI
Marcin Rabiza
252
3
0
02 Nov 2024
Explainable few-shot learning workflow for detecting invasive and exotic tree species
Scientific Reports (Sci Rep), 2024
Caroline M. Gevaert
Alexandra Aguiar Pedro
Ou Ku
Hao Cheng
Pranav Chandramouli
Farzaneh Dadrass Javan
Francesco Nattino
Sonja Georgievska
156
2
0
01 Nov 2024
Eliciting Critical Reasoning in Retrieval-Augmented Language Models via Contrastive Explanations
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Leonardo Ranaldi
Marco Valentino
André Freitas
RALM, LRM
236
6
0
30 Oct 2024
Human-Readable Programs as Actors of Reinforcement Learning Agents Using Critic-Moderated Evolution
Senne Deproost
Denis Steckelmacher
Ann Nowé
153
0
0
29 Oct 2024
Towards Human-centered Design of Explainable Artificial Intelligence (XAI): A Survey of Empirical Studies
Shuai Ma
246
5
0
28 Oct 2024
Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments
AAAI Conference on Artificial Intelligence (AAAI), 2024
M. Domnich
Julius Valja
Rasmus Moorits Veski
Giacomo Magnifico
Kadi Tulver
Eduard Barbu
Raul Vicente
LRM, ELM
343
4
0
28 Oct 2024
Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Dongryeol Lee
Yerin Hwang
Yongil Kim
Joonsuk Park
Kyomin Jung
ELM
363
16
0
28 Oct 2024
Info-CELS: Informative Saliency Map Guided Counterfactual Explanation
Peiyu Li
O. Bahri
Pouya Hosseinzadeh
S. F. Boubrahimi
S. M. Hamdi
CML, LRM
358
3
0
27 Oct 2024
Evaluating the Influences of Explanation Style on Human-AI Reliance
Emma Casolin
Flora D. Salim
Ben Newell
214
1
0
26 Oct 2024
Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation in the medical domain
Jaime Sevilla
Nikolay Babakov
Ehud Reiter
Alberto Bugarin
FAtt
117
3
0
23 Oct 2024
An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems
Shruthi Chari
186
0
0
23 Oct 2024
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek
Paloma Korycińska
Monika Krakowska
Maciej Mozolewski
Dorota Rak
Magdalena Zych
Magdalena Wójcik
Grzegorz J. Nalepa
ELM
167
2
0
21 Oct 2024
Linking Model Intervention to Causal Interpretation in Model Explanation
Debo Cheng
Ziqi Xu
Jiuyong Li
Lin Liu
Kui Yu
T. Le
Jixue Liu
CML
267
0
0
21 Oct 2024
Dataset resulting from the user study on comprehensibility of explainable AI algorithms
Scientific Data (Sci Data), 2024
Szymon Bobek
Paloma Korycińska
Monika Krakowska
Maciej Mozolewski
Dorota Rak
Magdalena Zych
Magdalena Wójcik
Grzegorz J. Nalepa
54
0
0
21 Oct 2024
Human-Centric eXplainable AI in Education
Subhankar Maity
Aniket Deroy
ELM
116
7
0
18 Oct 2024
HR-Bandit: Human-AI Collaborated Linear Recourse Bandit
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Junyu Cao
Ruijiang Gao
Esmaeil Keyvanshokooh
502
4
0
18 Oct 2024
SSET: Swapping-Sliding Explanation for Time Series Classifiers in Affect Detection
Nazanin Fouladgar
Marjan Alirezaie
Kary Främling
AI4TS, FAtt
196
0
0
16 Oct 2024
Generating Global and Local Explanations for Tree-Ensemble Learning Methods by Answer Set Programming
Theory and Practice of Logic Programming (TPLP), 2024
A. Takemura
Katsumi Inoue
140
2
0
14 Oct 2024
"I think you need help! Here's why": Understanding the Effect of
  Explanations on Automatic Facial Expression Recognition
"I think you need help! Here's why": Understanding the Effect of Explanations on Automatic Facial Expression RecognitionAffective Computing and Intelligent Interaction (ACII), 2024
Sanjeev Nahulanthran
Mor Vered
Leimin Tian
Dana Kulić
131
0
0
13 Oct 2024
CE-MRS: Contrastive Explanations for Multi-Robot Systems
IEEE Robotics and Automation Letters (RA-L), 2024
Ethan Schneider
Daniel Wu
Devleena Das
Sonia Chernova
130
0
0
10 Oct 2024
Understanding with toy surrogate models in machine learning
Andrés Páez
SyDa
180
2
0
08 Oct 2024
Explanation sensitivity to the randomness of large language models: the case of journalistic text classification
Jérémie Bogaert
Marie-Catherine de Marneffe
Antonin Descampe
Louis Escouflaire
Cedrick Fairon
François-Xavier Standaert
314
3
0
07 Oct 2024
Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills
International Conference on Human Factors in Computing Systems (CHI), 2024
Zana Buçinca
S. Swaroop
Amanda E. Paluch
Finale Doshi-Velez
Krzysztof Z. Gajos
278
10
0
05 Oct 2024
An action language-based formalisation of an abstract argumentation framework
Yann Munro
Camilo Sarmiento
Isabelle Bloch
Gauvain Bourgne
Catherine Pelachaud
Marie-Jeanne Lesot
129
0
0
29 Sep 2024
Trustworthy AI: Securing Sensitive Data in Large Language Models
Applied Informatics (AI), 2024
G. Feretzakis
V. Verykios
185
33
0
26 Sep 2024
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackBoxNLP), 2024
Supriya Manna
Niladri Sett
AAML
292
3
0
26 Sep 2024
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Elisa Nguyen
Johannes Bertram
Evgenii Kortukov
Jean Y. Song
Seong Joon Oh
TDI
1.3K
2
0
25 Sep 2024
Explainable and Human-Grounded AI for Decision Support Systems: The Theory of Epistemic Quasi-Partnerships
John Dorsch
Maximilian Moll
190
4
0
23 Sep 2024
A User Study on Contrastive Explanations for Multi-Effector Temporal Planning with Non-Stationary Costs
IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 2024
Xiaowei Liu
Kevin McAreavey
Weiru Liu
137
1
0
20 Sep 2024
Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer
Joshua Holstein
Katelyn Morrison
Kenneth Holstein
Gerhard Satzger
Niklas Kühl
195
6
0
19 Sep 2024
Abductive explanations of classifiers under constraints: Complexity and properties
European Conference on Artificial Intelligence (ECAI), 2024
Martin Cooper
Leila Amgoud
161
9
0
18 Sep 2024
Explaining Non-monotonic Normative Reasoning using Argumentation Theory with Deontic Logic
Zhe Yu
Yiwei Lu
40
2
0
18 Sep 2024
Enhancing Security Testing Software for Systems that Cannot be Subjected to the Risks of Penetration Testing Through the Incorporation of Multi-threading and and Other Capabilities
Matthew Tassava
Cameron Kolodjski
Jordan Milbrath
Jeremy Straub
180
1
0
17 Sep 2024
Questioning AI: Promoting Decision-Making Autonomy Through Reflection
Simon WS Fischer
104
1
0
16 Sep 2024
ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen
Tiffany Knearem
Reshmi Ghosh
Yu-Ju Yang
Nicholas Clark
Tanushree Mitra
Yun Huang
268
0
0
15 Sep 2024
Enumerating Minimal Unsatisfiable Cores of LTLf formulas
Antonio Ielo
Giuseppe Mazzotta
Rafael Peñaloza
Francesco Ricca
LRM
64
0
0
14 Sep 2024
Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning
International Conference on Human Factors in Computing Systems (CHI), 2024
Robert Kaufman
Emi Lee
Manas Satish Bedmutha
David Kirsh
Nadir Weibel
213
4
0
13 Sep 2024
Explainable AI: Definition and attributes of a good explanation for health AI
AI and Ethics (AI & Ethics), 2024
E. Kyrimi
S. McLachlan
Jared M Wohlgemut
Zane B Perkins
David A. Lagnado
W. Marsh
the ExAIDSS Expert Group
XAI
200
1
0
09 Sep 2024
Interpretable Clustering: A Survey
Lianyu Hu
Mudi Jiang
Junjie Dong
Xinying Liu
Zengyou He
254
7
0
01 Sep 2024