Quantifying Interpretability and Trust in Machine Learning Systems

20 January 2019
Philipp Schmidt, F. Biessmann

Papers citing "Quantifying Interpretability and Trust in Machine Learning Systems"

20 of 20 citing papers shown
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Stephanie Brandl, Oliver Eberle, Tiago F. R. Ribeiro, Anders Søgaard, Nora Hollenstein
29 Feb 2024
Trust, distrust, and appropriate reliance in (X)AI: a survey of empirical evaluation of user trust
Roel W. Visser, Tobias M. Peters, Ingrid Scharlau, Barbara Hammer
04 Dec 2023
Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
FAtt
21 Sep 2023
RecRec: Algorithmic Recourse for Recommender Systems
Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P. Dickerson, Chirag Shah
28 Aug 2023
Evaluating self-attention interpretability through human-grounded experimental protocol
Milan Bhan, Nina Achache, Victor Legrand, A. Blangero, Nicolas Chesneau
27 Mar 2023
Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt
17 Feb 2023
Are we measuring trust correctly in explainability, interpretability, and transparency research?
Tim Miller
31 Aug 2022
Interpretation Quality Score for Measuring the Quality of Interpretability Methods
Sean Xie, Soroush Vosoughi, Saeed Hassanpour
XAI
24 May 2022
Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions
Alicia Parrish, H. Trivedi, Ethan Perez, Angelica Chen, Nikita Nangia, Jason Phang, Sam Bowman
11 Apr 2022
Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
FAtt
07 Mar 2022
Towards a Responsible AI Development Lifecycle: Lessons From Information Security
Erick Galinkin
SILM
06 Mar 2022
Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
FAtt
22 Feb 2022
Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
22 Oct 2021
Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence
Max Schemmer, Niklas Kühl, G. Satzger
28 Sep 2021
Spoofing Generalization: When Can't You Trust Proprietary Models?
Ankur Moitra, Elchanan Mossel, Colin Sandon
FedML
15 Jun 2021
On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi
AAML
08 Jun 2021
How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
21 Jan 2021
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt
23 Oct 2020
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
26 Jun 2020