Explanation in Artificial Intelligence: Insights from the Social Sciences

22 June 2017
Tim Miller
    XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences"

50 / 1,336 papers shown
Realistic Counterfactual Explanations for Machine Learning-Controlled Mobile Robots using 2D LiDAR
European Control Conference (ECC), 2025
Sindre Benjamin Remman
A. Lekkas
11 May 2025
Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations
Dima Alattal
Asal Khoshravan Azar
P. Myles
Richard Branson
Hatim Abdulhussein
Allan Tucker
10 May 2025
See What I Mean? CUE: A Cognitive Model of Understanding Explanations
Tobias Labarta
Nhi Hoang
Katharina Weitz
Wojciech Samek
Sebastian Lapuschkin
Leander Weber
09 May 2025
What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), 2025
Somayeh Molaei
Lionel P. Robert
Nikola Banovic
09 May 2025
KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning
Stephen Richard Varey
A. D. Stefano
Anh Han
07 May 2025
Robustness questions the interpretability of graph neural networks: what to do?
Kirill Lukyanov
Georgii Sazonov
Serafim Boyarsky
Ilya Makarov
AAML
05 May 2025
A New Approach to Backtracking Counterfactual Explanations: A Unified Causal Framework for Efficient Model Interpretability
Pouria Fatemi
Ehsan Sharifian
Mohammad Hossein Yassaee
05 May 2025
xEEGNet: Towards Explainable AI in EEG Dementia Classification
Journal of Neural Engineering (J. Neural Eng.), 2025
Andrea Zanola
Louis Fabrice Tshimanga
Federico Del Pup
Marco Baiesi
Manfredo Atzori
30 Apr 2025
Disjunctive and Conjunctive Normal Form Explanations of Clusters Using Auxiliary Information
Robert F. Downey
S. S. Ravi
29 Apr 2025
Enhancing Cell Counting through MLOps: A Structured Approach for Automated Cell Analysis
Matteo Testi
Luca Clissa
Matteo Ballabio
Salvatore Ricciardi
Federico Baldo
Emanuele Frontoni
S. Moccia
Gennaro Vessio
28 Apr 2025
Mitigating Societal Cognitive Overload in the Age of AI: Challenges and Directions
Salem Lahlou
28 Apr 2025
Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software
International Conference on Human Factors in Computing Systems (CHI), 2025
Anjali Khurana
Xiaotian Su
April Yi Wang
Parmit K. Chilana
22 Apr 2025
Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room
Danial Hooshyar
Gustav Šír
Yeongwook Yang
Eve Kikas
Raija Hamalainen
T. Karkkainen
Dragan Gašević
Roger Azevedo
22 Apr 2025
Causal DAG Summarization (Full Version)
Anna Zeng
Michael Cafarella
Batya Kenig
Markos Markakis
Brit Youngmann
Babak Salimi
CML
21 Apr 2025
ScholarMate: A Mixed-Initiative Tool for Qualitative Knowledge Work and Information Sensemaking
Symposium on Human-Computer Interaction for Work (CHIWORK), 2025
Runlong Ye
Patrick Yung Kang Lee
Matthew Varona
Oliver Huang
Carolina Nobre
19 Apr 2025
Probabilistic Stability Guarantees for Feature Attributions
Helen Jin
Anton Xue
Weiqiu You
Surbhi Goel
Eric Wong
18 Apr 2025
AskQE: Question Answering as Automatic Evaluation for Machine Translation
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Dayeon Ki
Kevin Duh
Marine Carpuat
15 Apr 2025
A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust
Chameera De Silva
Thilina Halloluwa
Dhaval Vyas
14 Apr 2025
GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
Asiful Arefeen
Saman Khamesian
Maria Adela Grando
Bithika Thompson
Hassan Ghasemzadeh
14 Apr 2025
Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
International Conference on Applied Cryptography and Network Security (ACNS), 2025
Marc Damie
Jean-Benoist Leger
Florian Hahn
Andreas Peter
AAML
14 Apr 2025
Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being
International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE), 2025
Esperança Amengual-Alcover
Antoni Jaume-i-Capó
Miquel Miró-Nicolau
Gabriel Moyà Alcover
Antonia Paniza-Fullana
11 Apr 2025
Exploring the Effectiveness and Interpretability of Texts in LLM-based Time Series Models
Zhengke Sun
Hangwei Qian
Ivor Tsang
AI4TS
09 Apr 2025
Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
M. Domnich
Rasmus Moorits Veski
Julius Valja
Kadi Tulver
Raul Vicente
FAtt
07 Apr 2025
Improving Counterfactual Truthfulness for Molecular Property Prediction through Uncertainty Quantification
Jonas Teufel
Annika Leinweber
Pascal Friederich
03 Apr 2025
Rubrik's Cube: Testing a New Rubric for Evaluating Explanations on the CUBE dataset
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Diana Galván-Sosa
Gabrielle Gaudeau
Pride Kavumba
Yunmeng Li
Hongyi Gu
Zheng Yuan
Keisuke Sakaguchi
P. Buttery
LRM
31 Mar 2025
Which LIME should I trust? Concepts, Challenges, and Solutions
Katharina Prasse
Sascha Marton
Udo Schlegel
Christian Bartelt
FAtt
31 Mar 2025
Interpretable Machine Learning in Physics: A Review
Sebastian Johann Wetzel
Seungwoong Ha
Raban Iten
Miriam Klopotek
Ziming Liu
AI4CE
30 Mar 2025
Exploring Explainable Multi-agent MCTS-minimax Hybrids in Board Game Using Process Mining
Yiyu Qian
Tim Miller
Zheng Qian
Liyuan Zhao
30 Mar 2025
Ranking Counterfactual Explanations
Suryani Lim
H. Prade
G. Richard
CML
20 Mar 2025
Disentangling Fine-Tuning from Pre-Training in Visual Captioning with Hybrid Markov Logic
BigData Congress [Services Society] (BSS), 2024
Monika Shah
Somdeb Sarkhel
Deepak Venugopal
MLLM, BDL, VLM
18 Mar 2025
Interpretable Transformation and Analysis of Timelines through Learning via Surprisability
Chaos (Chaos), 2025
O. Mokryn
Teddy Lazebnik
Hagit Ben-Shoshan
AI4TS
06 Mar 2025
Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
Van Bach Nguyen
C. Seifert
Jorg Schlotterer
BDL
06 Mar 2025
Conceptual Contrastive Edits in Textual and Vision-Language Retrieval
Maria Lymperaiou
Giorgos Stamou
VLM
01 Mar 2025
Why Trust in AI May Be Inevitable
Nghi Truong
Phanish Puranam
Ilia Testlin
28 Feb 2025
QPM: Discrete Optimization for Globally Interpretable Image Classification
International Conference on Learning Representations (ICLR), 2025
Thomas Norrenbrock
Timo Kaiser
Sovan Biswas
R. Manuvinakurike
Bodo Rosenhahn
27 Feb 2025
Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices
International Conference on Intelligent User Interfaces (IUI), 2025
Xinru Wang
Mengjie Yu
Hannah Nguyen
Michael Iuzzolino
Tianyi Wang
...
Ting Zhang
Naveen Sendhilnathan
Hrvoje Benko
Haijun Xia
Tanya R. Jonker
26 Feb 2025
A Method for Evaluating the Interpretability of Machine Learning Models in Predicting Bond Default Risk Based on LIME and SHAP
Yan Zhang
Lin Chen
Yixiang Tian
FAtt
26 Feb 2025
Can LLMs Explain Themselves Counterfactually?
Zahra Dehghanighobadi
Asja Fischer
Muhammad Bilal Zafar
LRM
25 Feb 2025
All You Need for Counterfactual Explainability Is Principled and Reliable Estimate of Aleatoric and Epistemic Uncertainty
Kacper Sokol
Eyke Hüllermeier
24 Feb 2025
RobustX: Robust Counterfactual Explanations Made Easy
International Joint Conference on Artificial Intelligence (IJCAI), 2024
Junqi Jiang
Luca Marzari
Aaryan Purohit
Francesco Leofante
CML
20 Feb 2025
Improving Chain-of-Thought Reasoning via Quasi-Symbolic Abstractions
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Leonardo Ranaldi
Marco Valentino
André Freitas
ReLM, LRM
18 Feb 2025
Q-STRUM Debate: Query-Driven Contrastive Summarization for Recommendation Comparison
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
George Saad
Scott Sanner
18 Feb 2025
Human-centered explanation does not fit all: The interplay of sociotechnical, cognitive, and individual factors in the effect AI explanations in algorithmic decision-making
Yongsu Ahn
Yu-Ru Lin
Malihe Alikhani
Eunjeong Cheon
17 Feb 2025
A Scoresheet for Explainable AI
Adaptive Agents and Multi-Agent Systems (AAMAS), 2025
Michael Winikoff
John Thangarajah
Sebastian Rodriguez
14 Feb 2025
Discovering Chunks in Neural Embeddings for Interpretability
Shuchen Wu
Stephan Alaniz
Eric Schulz
Zeynep Akata
03 Feb 2025
Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review
ACM Computing Surveys (ACM CSUR), 2024
Giovanni Ciatto
Federico Sabbatini
Andrea Agiollo
Matteo Magnini
Andrea Omicini
28 Jan 2025
XEQ Scale for Evaluating XAI Experience Quality
A. Wijekoon
Nirmalie Wiratunga
D. Corsar
Kyle Martin
Ikechukwu Nkisi-Orji
Belén Díaz-Agudo
Derek Bridge
20 Jan 2025
Evidential Deep Learning for Uncertainty Quantification and Out-of-Distribution Detection in Jet Identification using Deep Neural Networks
Ayush Khot
Xiwei Wang
Avik Roy
Volodymyr V. Kindratenko
Mark S. Neubauer
EDL
10 Jan 2025
Mechanistic understanding and validation of large AI models with SemanticLens
Maximilian Dreyer
J. Berend
Tobias Labarta
Johanna Vielhaben
Thomas Wiegand
Sebastian Lapuschkin
Wojciech Samek
10 Jan 2025
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
Artificial Intelligence and Law (AI & Law), 2025
Laura State
Alejandra Bringas Colmenarejo
Andrea Beretta
Salvatore Ruggieri
Franco Turini
Stephanie Law
AILaw, ELM
10 Jan 2025