ResearchTrend.AI
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection

19 November 2018
Vivian Lai, Chenhao Tan

Papers citing "On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection"

50 papers shown.

  • Eye Movements as Indicators of Deception: A Machine Learning Approach. Valentin Foucher, Santiago de Leon-Martinez, Robert Moro. 05 May 2025.
  • Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions. Federico Maria Cau, Lucio Davide Spano. 02 May 2025.
  • The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making. Takehiro Takayanagi, Ryuji Hashimoto, Chung-Chi Chen, Kiyoshi Izumi. 21 Feb 2025.
  • The Value of Information in Human-AI Decision-making. Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman. 10 Feb 2025.
  • Unexploited Information Value in Human-AI Collaboration. Ziyang Guo, Yifan Wu, Jason D. Hartline, Jessica Hullman. 03 Nov 2024.
  • Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making. Min Hun Lee, Renee Bao Xuan Ng, Silvana Xin Yi Choo, S. Thilarajah. 24 Sep 2024.
  • Misfitting With AI: How Blind People Verify and Contest AI Errors. Rahaf Alharbi, P. Lor, Jaylin Herskovitz, S. Schoenebeck, Robin Brewer. 13 Aug 2024.
  • Whether to trust: the ML leap of faith. Tory Frame, Sahraoui Dhelim, George Stothart, E. Coulthard. 17 Jul 2024.
  • Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making. Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma. 25 Mar 2024.
  • Software Doping Analysis for Human Oversight. Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr. 11 Aug 2023.
  • In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making. Raymond Fok, Daniel S. Weld. 12 May 2023.
  • Artifact magnification on deepfake videos increases human detection and subjective confidence. Emilie Josephs, Camilo Luciano Fosco, A. Oliva. 10 Apr 2023.
  • Towards Explainable AI Writing Assistants for Non-native English Speakers. Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee. 05 Apr 2023.
  • Learning Human-Compatible Representations for Case-Based Decision Support. Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan. 06 Mar 2023.
  • Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations. Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger. 04 Feb 2023.
  • Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal. 18 Jan 2023.
  • Improving Human-AI Collaboration With Descriptions of AI Behavior. Ángel Alexander Cabrera, Adam Perer, Jason I. Hong. 06 Jan 2023.
  • A Human-ML Collaboration Framework for Improving Video Content Reviews. Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen. 18 Oct 2022.
  • Learning When to Advise Human Decision Makers. Gali Noti, Yiling Chen. 27 Sep 2022.
  • Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making. K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn. 16 Aug 2022.
  • Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language. R. Sevastjanova, Mennatallah El-Assady. 14 Jul 2022.
  • Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction. Sharadhi Alape Suryanarayana, David Sarne. 24 May 2022.
  • Argumentative Explanations for Pattern-Based Text Classifiers. Piyawat Lertvittayakumjorn, Francesca Toni. 22 May 2022.
  • "If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment. Yaniv Yacoby, Ben Green, Christopher L. Griffin, Finale Doshi-Velez. 11 May 2022.
  • A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making. Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vössing. 10 May 2022.
  • Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation. Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan. 25 Apr 2022.
  • Robustness and Usefulness in AI Explanation Methods. Erick Galinkin. 07 Mar 2022.
  • Better Together? An Evaluation of AI-Supported Code Translation. Justin D. Weisz, Michael J. Muller, Steven I. Ross, Fernando Martinez, Stephanie Houde, Mayank Agarwal, Kartik Talamadupula, John T. Richards. 15 Feb 2022.
  • Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study. Kinshuk Sengupta, Praveen Ranjan Srivastava. 22 Jan 2022.
  • Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations. Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig. 17 Dec 2021.
  • HIVE: Evaluating the Human Interpretability of Visual Explanations. Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky. 06 Dec 2021.
  • Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models? Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen. 22 Oct 2021.
  • Interpreting Deep Learning Models in Natural Language Processing: A Review. Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021.
  • Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence. Max Schemmer, Niklas Kühl, G. Satzger. 28 Sep 2021.
  • Decision-Focused Summarization. Chao-Chun Hsu, Chenhao Tan. 14 Sep 2021.
  • The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies. Riccardo Fogliato, Alexandra Chouldechova, Zachary Chase Lipton. 03 Sep 2021.
  • Explanation-Based Human Debugging of NLP Models: A Survey. Piyawat Lertvittayakumjorn, Francesca Toni. 30 Apr 2021.
  • Increasing the Speed and Accuracy of Data Labeling Through an AI Assisted Interface. Michael Desmond, Zahra Ashktorab, Michelle Brachman, Kristina Brimijoin, E. Duesterwald, ..., Catherine Finegan-Dollak, Michael J. Muller, N. Joshi, Qian Pan, Aabhas Sharma. 09 Apr 2021.
  • Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs. Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan. 17 Feb 2021.
  • Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs. Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan. 24 Jan 2021.
  • How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations. Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama. 21 Jan 2021.
  • Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett. 15 Oct 2020.
  • Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld. 26 Jun 2020.
  • Does Explainable Artificial Intelligence Improve Human Decision-Making? Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu. 19 Jun 2020.
  • Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study. Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze. 03 Feb 2020.
  • "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans. Vivian Lai, Han Liu, Chenhao Tan. 14 Jan 2020.
  • Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Q. V. Liao, D. Gruen, Sarah Miller. 08 Jan 2020.
  • Visual Interaction with Deep Learning Models through Collaborative Semantic Inference. Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush. 24 Jul 2019.
  • Learning Representations by Humans, for Humans. Sophie Hilgard, Nir Rosenfeld, M. Banaji, Jack Cao, David C. Parkes. 29 May 2019.
  • Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability. Brian Lubars, Chenhao Tan. 08 Feb 2019.