ResearchTrend.AI

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · 22 June 2017 · arXiv:1706.07269 · XAI

Papers citing "Explanation in Artificial Intelligence: Insights from the Social Sciences" (50 of 1,242 papers shown)
- Reliability and Interpretability in Science and Deep Learning
  Luigi Scorzato · 14 Jan 2024
- Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty
  Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Maarten Sap · 12 Jan 2024
- What should I say? -- Interacting with AI and Natural Language Interfaces
  Mark Adkins · 12 Jan 2024
- Effects of Multimodal Explanations for Autonomous Driving on Driving Performance, Cognitive Load, Expertise, Confidence, and Trust
  Robert Kaufman, Jean Costa, Everlyne Kimani · 08 Jan 2024
- Verifying Relational Explanations: A Probabilistic Approach
  Abisha Thapa Magar, Anup Shakya, Somdeb Sarkhel, Deepak Venugopal · 05 Jan 2024
- Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
  Aditya Bhattacharya · 29 Dec 2023
- Q-SENN: Quantized Self-Explaining Neural Networks
  Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn · FAtt, AAML, MILM · 21 Dec 2023
- Online Handbook of Argumentation for AI: Volume 4
  Lars Bengel, Lydia Blümel, Elfia Bezou-Vrakatseli, Federico Castagna, Giulia D'Agostino, ..., Daphne Odekerken, Fabrizio Russo, Stefan Sarkadi, Madeleine Waller, A. Xydis · 20 Dec 2023
- Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space
  Param S. Rajpura, H. Cecotti, Y. Meena · 20 Dec 2023
- Probabilistic Prediction of Longitudinal Trajectory Considering Driving Heterogeneity with Interpretability
  Shuli Wang, Kun Gao, Lanfang Zhang, Yang Liu, Lei Chen · 19 Dec 2023
- Explaining Reinforcement Learning Agents Through Counterfactual Action Outcomes
  Yotam Amitai, Yael Septon, Ofra Amir · CML · 18 Dec 2023
- The Pros and Cons of Adversarial Robustness
  Yacine Izza, Sasha Rubin · AAML · 18 Dec 2023
- The Metacognitive Demands and Opportunities of Generative AI
  Lev Tankelevitch, Viktor Kewenig, Auste Simkute, A. E. Scott, Advait Sarkar, Abigail Sellen, Sean Rintel · AI4CE · 18 Dec 2023
- Evaluative Item-Contrastive Explanations in Rankings
  Alessandro Castelnovo, Riccardo Crupi, Nicolo Mombelli, Gabriele Nanino, D. Regoli · XAI, ELM · 14 Dec 2023
- Clash of the Explainers: Argumentation for Context-Appropriate Explanations
  Leila Methnani, Virginia Dignum, Andreas Theodorou · 12 Dec 2023
- Anytime Approximate Formal Feature Attribution
  Jinqiang Yu, Graham Farr, Alexey Ignatiev, Peter J. Stuckey · 12 Dec 2023
- "I Want It That Way": Enabling Interactive Decision Support Using Large Language Models and Constraint Programming
  Connor Lawless, Jakob Schoeffer, Lindy Le, Kael Rowan, Shilad Sen, Cristina St. Hill, Jina Suh, Bahar Sarrafzadeh · 12 Dec 2023
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making
  Milad Rogha · 11 Dec 2023
- Is Feedback All You Need? Leveraging Natural Language Feedback in Goal-Conditioned Reinforcement Learning
  Sabrina McCallum, Max Taylor-Davies, Stefano V. Albrecht, Alessandro Suglia · 07 Dec 2023
- Enhancing the Rationale-Input Alignment for Self-explaining Rationalization
  Wei Liu, Yining Qi, Jun Wang, Zhiying Deng, Yuankai Zhang, Chengwei Wang, Ruixuan Li · 07 Dec 2023
- Explaining with Contrastive Phrasal Highlighting: A Case Study in Assisting Humans to Detect Translation Differences
  Eleftheria Briakou, Navita Goyal, Marine Carpuat · 04 Dec 2023
- Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation
  Xijia Zhang, Yue (Sophie) Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell · LLMAG, LM&Ro · 29 Nov 2023
- Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning
  Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, S. Sreedharan · LRM, LLMAG · 22 Nov 2023
- Trustworthy AI: Deciding What to Decide
  Caesar Wu, Yuan-Fang Li, Jian Li, Jingjing Xu, Pascal Bouvry · 21 Nov 2023
- On the Relationship Between Interpretability and Explainability in Machine Learning
  Benjamin Leblanc, Pascal Germain · FaML · 20 Nov 2023
- The Rise of the AI Co-Pilot: Lessons for Design from Aviation and Beyond
  Abigail Sellen, Eric Horvitz · 16 Nov 2023
- Forms of Understanding of XAI-Explanations
  Hendrik Buschmeier, H. M. Buhl, Friederike Kern, Angela Grimminger, Helen Beierling, ..., Lutz Terfloth, Anna-Lisa Vollmer, Yu Wang, Annedore Wilmes, Britta Wrede · XAI · 15 Nov 2023
- Explain-then-Translate: An Analysis on Improving Program Translation with Self-generated Explanations
  Zilu Tang, Mayank Agarwal, Alex Shypula, Bailin Wang, Derry Wijaya, Jie Chen, Yoon Kim · LRM · 13 Nov 2023
- Is Machine Learning Unsafe and Irresponsible in Social Sciences? Paradoxes and Reconsidering from Recidivism Prediction Tasks
  Jianhong Liu, D. Li · 11 Nov 2023
- On the Multiple Roles of Ontologies in Explainable AI
  Roberto Confalonieri, G. Guizzardi · 08 Nov 2023
- Extracting human interpretable structure-property relationships in chemistry using XAI and large language models
  Geemi P Wellawatte, Philippe Schwaller · 07 Nov 2023
- Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
  Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover · XAI · 03 Nov 2023
- Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective
  A. Haque, A. Najmul Islam, Patrick Mikalef · 01 Nov 2023
- Will Code Remain a Relevant User Interface for End-User Programming with Generative AI Models?
  Advait Sarkar · 01 Nov 2023
- Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
  Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, ..., Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf · 30 Oct 2023
- "Honey, Tell Me What's Wrong": Global Explanation of Textual Discriminative Models through Cooperative Generation
  Antoine Chaffin, Julien Delaunay · 27 Oct 2023
- On General Language Understanding
  David Schlangen · 27 Oct 2023
- Physician Detection of Clinical Harm in Machine Translation: Quality Estimation Aids in Reliance and Backtranslation Identifies Critical Errors
  Nikita Mehandru, Sweta Agrawal, Yimin Xiao, Elaine C. Khoong, Ge Gao, Marine Carpuat, Niloufar Salehi · 25 Oct 2023
- Human-centred explanation of rule-based decision-making systems in the legal domain
  Suzan Zuurmond, AnneMarie Borg, M. V. Kempen, Remi Wieten · 25 Oct 2023
- On the stability, correctness and plausibility of visual explanation methods based on feature importance
  Romain Xu-Darme, Jenny Benois-Pineau, R. Giot, Georges Quénot, Zakaria Chihani, M. Rousset, Alexey Zhukov · XAI, FAtt · 25 Oct 2023
- Faithful Path Language Modeling for Explainable Recommendation over Knowledge Graph
  Giacomo Balloccu, Ludovico Boratto, Christian Cancedda, Gianni Fenu, Mirko Marras · 25 Oct 2023
- The WHY in Business Processes: Discovery of Causal Execution Dependencies
  Fabiana Fournier, Lior Limonad, Inna Skarbovsky, Yuval David · 23 Oct 2023
- XTSC-Bench: Quantitative Benchmarking for Explainers on Time Series Classification
  Jacqueline Höllig, Steffen Thoma, Florian Grimm · AI4TS · 23 Oct 2023
- Explainable Depression Symptom Detection in Social Media
  Eliseo Bao Souto, Anxo Perez, Javier Parapar · 20 Oct 2023
- Generating collective counterfactual explanations in score-based classification via mathematical optimization
  E. Carrizosa, Jasone Ramírez-Ayerbe, Dolores Romero Morales · 19 Oct 2023
- Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong
  Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé, Jordan L. Boyd-Graber · LRM · 19 Oct 2023
- Rather a Nurse than a Physician -- Contrastive Explanations under Investigation
  Oliver Eberle, Ilias Chalkidis, Laura Cabello, Stephanie Brandl · 18 Oct 2023
- Scene Text Recognition Models Explainability Using Local Features
  M. Ty, Rowel Atienza · 14 Oct 2023
- An Information Bottleneck Characterization of the Understanding-Workload Tradeoff
  Lindsay M. Sanneman, Mycal Tucker, Julie A. Shah · 11 Oct 2023
- InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
  Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller · 09 Oct 2023