How model accuracy and explanation fidelity influence user trust
A. Papenmeier, G. Englebienne, C. Seifert
26 July 2019 · arXiv:1907.12652
FaML

Papers citing "How model accuracy and explanation fidelity influence user trust"

25 / 25 papers shown
Beware of "Explanations" of AI
Beware of "Explanations" of AI
David Martens
Galit Shmueli
Theodoros Evgeniou
Kevin Bauer
Christian Janiesch
...
Claudia Perlich
Wouter Verbeke
Alona Zharova
Patrick Zschech
F. Provost
36
0
0
09 Apr 2025
Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl
50 · 3 · 0 · 19 Sep 2024
Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl
48 · 4 · 0 · 29 Apr 2024
Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare
Elisa Rubegni, Omran Ayoub, Stefania Maria Rita Rizzo, Marco Barbero, G. Bernegger, Francesca Faraci, Francesca Mangili, Emiliano Soldini, P. Trimboli, Alessandro Facchini
34 · 1 · 0 · 06 Apr 2024
Influence based explainability of brain tumors segmentation in multimodal Magnetic Resonance Imaging
Tommaso Torda, Andrea Ciardiello, Simona Gargiulo, Greta Grillo, Simone Scardapane, Cecilia Voena, S. Giagu
34 · 0 · 0 · 05 Apr 2024
Trust, distrust, and appropriate reliance in (X)AI: a survey of empirical evaluation of user trust
Roel W. Visser, Tobias M. Peters, Ingrid Scharlau, Barbara Hammer
29 · 5 · 0 · 04 Dec 2023
Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
FAtt
42 · 4 · 0 · 21 Sep 2023
Automatic Textual Explanations of Concept Lattices
Johannes Hirth, Viktoria Horn, Gerd Stumme, Tom Hanika
LRM
18 · 1 · 0 · 17 Apr 2023
How Accurate Does It Feel? -- Human Perception of Different Types of Classification Mistakes
A. Papenmeier, Dagmar Kern, Daniel Hienert, Yvonne Kammerer, C. Seifert
36 · 19 · 0 · 13 Feb 2023
The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation
Jochen Hartmann, Jasper Schwenzow, Maximilian Witte
22 · 203 · 0 · 05 Jan 2023
Do Not Trust a Model Because It is Confident: Uncovering and Characterizing Unknown Unknowns to Student Success Predictors in Online-Based Learning
Roberta Galici, Tanja Käser, Gianni Fenu, Mirko Marras
36 · 6 · 0 · 16 Dec 2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar, Tim Miller
FAtt
29 · 6 · 0 · 19 Nov 2022
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
45 · 78 · 0 · 06 May 2022
User Trust on an Explainable AI-based Medical Diagnosis Support System
Yao Rong, N. Castner, Efe Bozkir, Enkelejda Kasneci
26 · 8 · 0 · 26 Apr 2022
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
41 · 16 · 0 · 27 Jan 2022
Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions
Eoin Delaney, Derek Greene, Mark T. Keane
38 · 24 · 0 · 20 Jul 2021
Pitfalls of Explainable ML: An Industry Perspective
Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
XAI
21 · 9 · 0 · 14 Jun 2021
Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative
A. Levy, Monica Agrawal, Arvind Satyanarayan, David Sontag
24 · 75 · 0 · 08 Mar 2021
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
XAI
68 · 415 · 0 · 15 Feb 2021
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Mahsan Nourani, J. King, Eric D. Ragan
25 · 99 · 0 · 20 Aug 2020
Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Sina Mohseni, Fan Yang, Shiva K. Pentyala, Mengnan Du, Yi Liu, Nic Lupfer, Xia Hu, Shuiwang Ji, Eric D. Ragan
21 · 41 · 0 · 24 Jul 2020
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
AAML
17 · 17 · 0 · 05 May 2020
Deceptive AI Explanations: Creation and Detection
Johannes Schneider, Christian Meske, Michalis Vlachos
34 · 28 · 0 · 21 Jan 2020
AI for Explaining Decisions in Multi-Agent Environments
Sarit Kraus, A. Azaria, J. Fiosina, Maike Greve, Noam Hazon, L. Kolbe, Tim-Benjamin Lembcke, J. P. Müller, Sören Schleibaum, M. Vollrath
33 · 40 · 0 · 10 Oct 2019
A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning
Sina Mohseni, Jeremy E. Block, Eric D. Ragan
FAtt, XAI
31 · 61 · 0 · 16 Jan 2018