ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 1706.07269 (v3, latest)

Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,335 papers shown
Explaining Tournament Solutions with Minimal Supports
Clément Contet, Umberto Grandi, Jérome Mengin
FAtt · 24 Dec 2025

A Framework for Causal Concept-based Model Explanations
Anna Rodum Bjøru, Jacob Lysnæs-Larsen, Oskar Jørgensen, Inga Strümke, H. Langseth
02 Dec 2025

Actionable and diverse counterfactual explanations incorporating domain knowledge and causal constraints
Szymon Bobek, Łukasz Bałec, Grzegorz J. Nalepa
25 Nov 2025

Formal Abductive Latent Explanations for Prototype-Based Networks
Jules Soria, Zakaria Chihani, Julien Girard-Satabin, Alban Grastien, Romain Xu-Darme, Daniela Cancila
20 Nov 2025

Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations
Yehonatan Elisha, Seffi Cohen, Oren Barkan, Noam Koenigstein
FAtt · 17 Nov 2025

MACIE: Multi-Agent Causal Intelligence Explainer for Collective Behavior Understanding
Abraham Itzhak Weinberg
CML · 11 Nov 2025

Unlocking the Black Box: A Five-Dimensional Framework for Evaluating Explainable AI in Credit Risk
Rongbin Ye, Jiaqi Chen
07 Nov 2025

Fair and Explainable Credit-Scoring under Concept Drift: Adaptive Explanation Frameworks for Evolving Populations
Shivogo John
FAtt · 05 Nov 2025

Retrofitters, pragmatists and activists: Public interest litigation for accountable automated decision-making
Henry Fraser, Zahra Stardust
05 Nov 2025

Variational Geometric Information Bottleneck: Learning the Shape of Understanding
Ronald Katende
04 Nov 2025

llmSHAP: A Principled Approach to LLM Explainability
Filip Naudot, Tobias Sundqvist, Timotheus Kampik
FAtt · 03 Nov 2025

Interpretable Model-Aware Counterfactual Explanations for Random Forest
Joshua S. Harvey, Guanchao Feng, Sai Anusha Meesala, Tina Zhao, Dhagash Mehta
FAtt, CML · 31 Oct 2025

Survey of Multimodal Geospatial Foundation Models: Techniques, Applications, and Challenges
Liling Yang, Ning Chen, Jun Yue, Yidan Liu, Jiayi Ma, Pedram Ghamisi, Antonio J. Plaza, Leyuan Fang
AI4TS · 27 Oct 2025

A Multi-level Analysis of Factors Associated with Student Performance: A Machine Learning Approach to the SAEB Microdata
Rodrigo Tertulino, Ricardo Almeida
25 Oct 2025

Towards the Formalization of a Trustworthy AI for Mining Interpretable Models explOiting Sophisticated Algorithms
Riccardo Guidotti, Martina Cinquini, Marta Marchiori Manerba, Mattia Setzu, Francesco Spinnato
23 Oct 2025

Leveraging Association Rules for Better Predictions and Better Explanations
Gilles Audemard, S. Coste-Marquis, Pierre Marquis, Mehdi Sabiri, N. Szczepanski
21 Oct 2025

What Questions Should Robots Be Able to Answer? A Dataset of User Questions for Explainable Robotics
Lennart Wachowiak, Andrew Coles, Gerard Canal, Oya Celiktutan
LM&Ro · 18 Oct 2025

Foundation and Large-Scale AI Models in Neuroscience: A Comprehensive Review
Shihao Yang, Xiying Huang, Danilo Bernardo, J. Ding, Andrew Michael, Jingmei Yang, Patrick Kwan, Ashish Raj, Feng Liu
AI4CE · 18 Oct 2025

Preliminary Quantitative Study on Explainability and Trust in AI Systems
Allen Daniel Sunny
17 Oct 2025

On the Design and Evaluation of Human-centered Explainable AI Systems: A Systematic Review and Taxonomy
Aline Mangold, Juliane Zietz, Susanne Weinhold, Sebastian Pannasch
XAI, ELM · 14 Oct 2025

ABLEIST: Intersectional Disability Bias in LLM-Generated Hiring Scenarios
Mahika Phutane, Hayoung Jung, Matthew Kim, Tanushree Mitra, Aditya Vashistha
13 Oct 2025

Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives
Andrada Iulia Prajescu, Roberto Confalonieri
ELM · 13 Oct 2025

Extended Triangular Method: A Generalized Algorithm for Contradiction Separation Based Automated Deduction
Yang Xu, Shuwei Chen, Jun Liu, Feng Cao, Xingxing He
12 Oct 2025

From Explainability to Action: A Generative Operational Framework for Integrating XAI in Clinical Mental Health Screening
Ratna Kandala, Akshata Kishore Moharir, Divya Arvinda Nayak
10 Oct 2025

Towards Meaningful Transparency in Civic AI Systems
Dave Murray-Rust, Kars Alfrink, Cristina Zaga
09 Oct 2025

Cluster Paths: Navigating Interpretability in Neural Networks
Nicholas M. Kroeger, Vincent Bindschaedler
08 Oct 2025

Semantic Regexes: Auto-Interpreting LLM Features with a Structured Language
Angie Boggust, Donghao Ren, Yannick Assogba, Dominik Moritz, Arvind Satyanarayan, Fred Hohman
07 Oct 2025

Reproducibility Study of "XRec: Large Language Models for Explainable Recommendation"
Ranjan Mishra, Julian I. Bibo, Quinten van Engelen, Henk Schaapman
LRM · 06 Oct 2025

Does Using Counterfactual Help LLMs Explain Textual Importance in Classification?
Nelvin Tan, James Asikin Cheung, Yu-Ching Shih, Dong Yang, Amol Salunkhe
05 Oct 2025

Kantian-Utilitarian XAI: Meta-Explained
Zahra Atf, Peter Lewis
04 Oct 2025

From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments
Anna Trapp, Mersedeh Sadeghi, Andreas Vogelsang
03 Oct 2025

Evaluation Framework for Highlight Explanations of Context Utilisation in Language Models
Jingyi Sun, Pepa Atanasova, Sagnik Ray Choudhury, Sekh Mainul Islam, Isabelle Augenstein
03 Oct 2025

Onto-Epistemological Analysis of AI Explanations
Martina Mattioli, Eike Petersen, Aasa Feragen, Marcello Pelillo, Siavash Bigdeli
03 Oct 2025

The Unheard Alternative: Contrastive Explanations for Speech-to-Text Models
Lina Conti, Dennis Fucci, Marco Gaido, Matteo Negri, Guillaume Wisniewski, L. Bentivogli
30 Sep 2025

Human-Centered Evaluation of RAG outputs: a framework and questionnaire for human-AI collaboration
Aline Mangold, Kiran Hoffmann
30 Sep 2025

Not All Explanations are Created Equal: Investigating the Pitfalls of Current XAI Evaluation
Joe Shymanski, Jacob Brue, Sandip Sen
XAI · 27 Sep 2025

Ontological foundations for contrastive explanatory narration of robot plans
Alberto Olivares-Alarcos, Sergi Foix, Júlia Borras, Gerard Canal, Guillem Alenyà
26 Sep 2025

Efficient & Correct Predictive Equivalence for Decision Trees
Joao Marques-Silva, Alexey Ignatiev
22 Sep 2025

Looking in the mirror: A faithful counterfactual explanation method for interpreting deep image classification models
T. Chowdhury, Vu Minh Hieu Phan, Kewen Liao, Nanyu Dong, Minh-Son To, Anton van den Hengel, Johan Verjans, Zhibin Liao
OOD · 20 Sep 2025

Towards a Transparent and Interpretable AI Model for Medical Image Classifications
Cognitive Neurodynamics (Cogn Neurodyn), 2025
Binbin Wen, Yihang Wu, Tareef Daqqaq, Ahmad Chaddad
20 Sep 2025

Secure Human Oversight of AI: Exploring the Attack Surface of Human Oversight
Jonas C. Ditz, Veronika Lazar, Elmar Lichtmeß, Carola Plesch, Matthias Heck, Kevin Baum, Markus Langer
AAML · 15 Sep 2025

Abduct, Act, Predict: Scaffolding Causal Inference for Automated Failure Attribution in Multi-Agent Systems
Alva West, Yixuan Weng, Minjun Zhu, Zhen Lin, Yue Zhang
12 Sep 2025

LLMs Don't Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations
Harry Mayne, Ryan Kearns, Yushi Yang, Andrew M. Bean, Eoin Delaney, Chris Russell, Adam Mahdi
LRM · 11 Sep 2025

An Interpretable Deep Learning Model for General Insurance Pricing
P. Laub, Tu Pho, Bernard Wong
10 Sep 2025

Triadic Fusion of Cognitive, Functional, and Causal Dimensions for Explainable LLMs: The TAXAL Framework
David Herrera-Poyatos, Carlos Peláez-González, Cristina Zuheros, Virilo Tejedor, Rosana Montes, F. Herrera
05 Sep 2025

TalkToAgent: A Human-centric Explanation of Reinforcement Learning Agents with Large Language Models
Haechang Kim, Hao Chen, Can Li, Jong Min Lee
LLMAG · 05 Sep 2025

An Information-Flow Perspective on Explainability Requirements: Specification and Verification
Bernd Finkbeiner, Hadar Frenkel, Julian Siber
01 Sep 2025

Ultra Strong Machine Learning: Teaching Humans Active Learning Strategies via Automated AI Explanations
L. Ai, Johannes Langer, Ute Schmid, Stephen Muggleton
31 Aug 2025

Interestingness First Classifiers
Ryoma Sato
27 Aug 2025

Model Science: getting serious about verification, explanation and control of AI systems
Przemyslaw Biecek, Wojciech Samek
27 Aug 2025