ResearchTrend.AI

Manipulating and Measuring Model Interpretability

21 February 2018
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach

Papers citing "Manipulating and Measuring Model Interpretability"

50 / 61 papers shown
Citations and Trust in LLM Generated Responses [HILM]
Yifan Ding, Matthew Facciani, Amrit Poudel, Ellen Joyce, Salvador Aguiñaga, Balaji Veeramani, Sanmitra Bhattacharya, Tim Weninger
03 Jan 2025

Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models
Upol Ehsan, Mark O. Riedl
09 Aug 2024

Graphical Perception of Saliency-based Model Explanations [XAI, FAtt]
Yayan Zhao, Mingwei Li, Matthew Berger
11 Jun 2024

Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [ELM]
Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma
25 Mar 2024

On the Challenges and Opportunities in Generative AI
Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Daubener, ..., F. Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
28 Feb 2024

Succinct Interaction-Aware Explanations [FAtt]
Sascha Xu, Joscha Cuppers, Jilles Vreeken
08 Feb 2024

On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model [FaML]
Teresa Scantamburlo, Joachim Baumann, Christoph Heitz
09 Oct 2023

Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue! [LRM]
Rishabh Jain
07 Sep 2023

My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
Aimen Gaba, Zhanna Kaufman, Jason Chueng, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, Cindy Xiong Bearfield
07 Aug 2023

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith, Markus Langer
26 Jul 2023

Towards Evaluating Explanations of Vision Transformers for Medical Imaging [MedIm]
Piotr Komorowski, Hubert Baniecki, P. Biecek
12 Apr 2023

A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
10 Apr 2023

Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [HILM]
Nicolas Scharowski, S. Perrig
29 Mar 2023

How Accurate Does It Feel? -- Human Perception of Different Types of Classification Mistakes
A. Papenmeier, Dagmar Kern, Daniel Hienert, Yvonne Kammerer, C. Seifert
13 Feb 2023

Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations
Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
04 Feb 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
18 Jan 2023

Improving Human-AI Collaboration With Descriptions of AI Behavior
Ángel Alexander Cabrera, Adam Perer, Jason I. Hong
06 Jan 2023

On the Relationship Between Explanation and Prediction: A Causal View [FAtt, CML]
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
13 Dec 2022

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models [FAtt]
Gayda Mutahar, Tim Miller
19 Nov 2022

An Interpretable Hybrid Predictive Model of COVID-19 Cases using Autoregressive Model and LSTM
Yangyi Zhang, Sui Tang, Guo-Ding Yu
14 Nov 2022

Learning When to Advise Human Decision Makers
Gali Noti, Yiling Chen
27 Sep 2022

Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making
K. Inkpen, Shreya Chappidi, Keri Mallari, Besmira Nushi, Divya Ramesh, Pietro Michelucci, Vani Mandava, Libuše Hannah Vepřek, Gabrielle Quinn
16 Aug 2022

"Is It My Turn?" Assessing Teamwork and Taskwork in Collaborative Immersive Analytics
Michaela Benk, Raphael P. Weibel, Stefan Feuerriegel, Andrea Ferrario
09 Aug 2022

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
06 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation [FAtt, ELM]
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
05 Jun 2022

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
06 May 2022

Interactive Model Cards: A Human-Centered Approach to Model Documentation [HAI]
Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani
05 May 2022

Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
25 Apr 2022

Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making
Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, G. Satzger
14 Apr 2022

Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
Max W. Shen
10 Feb 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning [FAtt]
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
30 Jan 2022

Towards Relatable Explainable AI with the Perceptual Process [AAML, XAI]
Wencan Zhang, Brian Y. Lim
28 Dec 2021

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [FAtt]
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
17 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Learning Optimal Predictive Checklists
Haoran Zhang, Q. Morris, Berk Ustun, Marzyeh Ghassemi
02 Dec 2021

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
Francesco Sovrano, F. Vitali
11 Sep 2021

The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies
Riccardo Fogliato, Alexandra Chouldechova, Zachary Chase Lipton
03 Sep 2021

Contemporary Symbolic Regression Methods and their Relative Performance
William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabrício Olivetti de França, M. Virgolin, Ying Jin, M. Kommenda, J. Moore
29 Jul 2021

Productivity, Portability, Performance: Data-Centric Python
Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She
01 Jul 2021

How Well do Feature Visualizations Support Causal Understanding of CNN Activations? [FAtt]
Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
23 Jun 2021

Explanation-Based Human Debugging of NLP Models: A Survey [LRM]
Piyawat Lertvittayakumjorn, Francesca Toni
30 Apr 2021

Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News
Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Rada Mihalcea
27 Apr 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges [FaML, AI4CE, LRM]
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
20 Mar 2021

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze
09 Mar 2021

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs [FAtt]
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan
17 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges [AI4TS, AI4CE]
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
19 Oct 2020

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [HAI]
Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett
15 Oct 2020