ResearchTrend.AI
"How do I fool you?": Manipulating User Trust via Misleading Black Box
  Explanations

"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations

15 November 2019
Himabindu Lakkaraju
Osbert Bastani
ArXivPDFHTML

Papers citing ""How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations"

50 / 57 papers shown
Title
What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Somayeh Molaei, Lionel P. Robert, Nikola Banovic
09 May 2025

Towards Responsible and Trustworthy Educational Data Mining: Comparing Symbolic, Sub-Symbolic, and Neural-Symbolic AI Methods
Danial Hooshyar, Eve Kikas, Yeongwook Yang, Gustav Šír, Raija Hamalainen, T. Karkkainen, Roger Azevedo
01 Apr 2025

Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl
19 Sep 2024

Algebraic Adversarial Attacks on Integrated Gradients
Lachlan Simpson, Federico Costanza, Kyle Millar, A. Cheng, Cheng-Chew Lim, Hong-Gunn Chew
Topics: SILM, AAML
23 Jul 2024

Efficient Exploration of the Rashomon Set of Rule Set Models
Martino Ciaperoni, Han Xiao, Aristides Gionis
05 Jun 2024

Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl
29 Apr 2024
Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research
Susanne Hindennach, Lei Shi, Filip Miletić, Andreas Bulling
19 Dec 2023

Trust, distrust, and appropriate reliance in (X)AI: a survey of empirical evaluation of user trust
Roel W. Visser, Tobias M. Peters, Ingrid Scharlau, Barbara Hammer
04 Dec 2023

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
Topics: FaML
20 Nov 2023

Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
Topics: FAtt
21 Sep 2023

Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Usha Bhalla, Suraj Srinivas, Himabindu Lakkaraju
Topics: FAtt, CML
27 Jul 2023
Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
L. Herm
18 Apr 2023

A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective
T. A. Bach, Amna Khan, Harry P. Hallock, Gabriel Beltrao, Sonia C. Sousa
18 Apr 2023

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
Topics: FAtt
16 Dec 2022

Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
Yifei Zhang, Nengneng Gao, Cunqing Ma
07 Dec 2022

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
Topics: FAtt
29 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
Topics: XAI, FAtt, AAML
09 Nov 2022

Logic-Based Explainability in Machine Learning
Sasha Rubin
Topics: LRM, XAI
24 Oct 2022

The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin
Topics: TDI
05 Oct 2022

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
Topics: FaML
23 Sep 2022
Inferring Sensitive Attributes from Model Explanations
Vasisht Duddu, A. Boutet
Topics: MIACV, SILM
21 Aug 2022

Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonccalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
Topics: XAI
19 Aug 2022

Algorithmic Assistance with Recommendation-Dependent Preferences
Bryce Mclaughlin, Jann Spiess
16 Aug 2022

On Computing Relevant Features for Explaining NBCs
Yacine Izza, Sasha Rubin
11 Jul 2022

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
06 Jun 2022
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
15 May 2022

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
06 May 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Topics: FAtt
30 Jan 2022

Global explainability in aligned image modalities
Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
Topics: FAtt
17 Dec 2021
On the Fairness of Machine-Assisted Human Decisions
Talia B. Gillis, Bryce Mclaughlin, Jann Spiess
Topics: FaML
28 Oct 2021

Unpacking the Black Box: Regulating Algorithmic Decisions
Laura Blattner, Scott Nelson, Jann Spiess
Topics: MLAU, FaML
05 Oct 2021

Toward a Unified Framework for Debugging Concept-based Models
A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso
23 Sep 2021

InfoGram and Admissible Machine Learning
S. Mukhopadhyay
Topics: FaML
17 Aug 2021

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Lio
25 Jul 2021
On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples
Verena Praher, Katharina Prinz, A. Flexer, Gerhard Widmer
Topics: AAML, FAtt
19 Jul 2021

What will it take to generate fairness-preserving explanations?
Jessica Dai, Sohini Upadhyay, Stephen H. Bach, Himabindu Lakkaraju
Topics: FAtt, FaML
24 Jun 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger
23 Jun 2021

Characterizing the risk of fairwashing
Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
14 Jun 2021

On Efficiently Explaining Graph-Based Classifiers
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Sasha Rubin
Topics: FAtt
02 Jun 2021
Information-theoretic Evolution of Model Agnostic Global Explanations
Sukriti Verma, Nikaash Puri, Piyush B. Gupta, Balaji Krishnamurthy
Topics: FAtt
14 May 2021

SAT-Based Rigorous Explanations for Decision Lists
Alexey Ignatiev, Sasha Rubin
Topics: XAI
14 May 2021

Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi
27 Mar 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Topics: FaML, AI4CE, LRM
20 Mar 2021
Detecting Spurious Correlations with Sanity Tests for Artificial Intelligence Guided Radiology Systems
U. Mahmood, Robik Shrestha, D. Bates, L. Mannelli, G. Corrias, Y. Erdi, Christopher Kanan
04 Mar 2021

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth
Topics: CML
26 Feb 2021

Connecting Interpretability and Robustness in Decision Trees through Separation
Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri
14 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Topics: FAtt
10 Dec 2020

Robust and Stable Black Box Explanations
Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
Topics: AAML, FAtt
12 Nov 2020