Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations

17 December 2021
arXiv:2112.09669
Siddhant Arora, Danish Pruthi, Norman M. Sadeh, William W. Cohen, Zachary Chase Lipton, Graham Neubig
Tags: FAtt

Papers citing "Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations"

22 / 22 papers shown

Comparative Explanations: Explanation Guided Decision Making for Human-in-the-Loop Preference Selection
Tanmay Chakraborty, Christian Wirth, Christin Seifert
01 Apr 2025

Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary
Zhuoyan Li, Ming Yin
02 Nov 2024

Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem
Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller, Vera Schmitt
Tags: LRM
11 Sep 2024

Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
01 Mar 2024

I-CEE: Tailoring Explanations of Image Classification Models to User Expertise
Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
19 Dec 2023

Representing visual classification as a linear combination of words
Shobhit Agarwal, Yevgeniy R. Semenov, William Lotter
18 Nov 2023

Be Careful When Evaluating Explanations Regarding Ground Truth
Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, P. Biecek
Tags: FAtt, AAML
08 Nov 2023

InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller
09 Oct 2023

Explaining Speech Classification Models via Word-Level Audio Segments and Paralinguistic Features
Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis
14 Sep 2023

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich
Tags: FAtt
25 May 2023

Neighboring Words Affect Human Interpretation of Saliency Explanations
Tim Dockhorn, Yaoliang Yu, Heike Adel, Mahdi Zolnouri, V. Nia
Tags: FAtt, MILM
04 May 2023

Assisting Human Decisions in Document Matching
Joon Sik Kim, Valerie Chen, Danish Pruthi, Nihar B. Shah, Ameet Talwalkar
16 Feb 2023

Silent Vulnerable Dependency Alert Prediction with Vulnerability Key Aspect Explanation
Jiamou Sun, Zhenchang Xing, Qinghua Lu, Xiwei Xu, Liming Zhu, Thong Hoang, Dehai Zhao
15 Feb 2023

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
Tags: ELM
20 Oct 2022

Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
Tags: FAtt
13 Oct 2022

Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
R. Sevastjanova, Mennatallah El-Assady
Tags: LRM
14 Jul 2022

Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
13 Jun 2022

Learning to Scaffold: Optimizing Model Explanations for Teaching
Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig
Tags: FAtt
22 Apr 2022

Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
Tags: FAtt
07 Mar 2022

Human Interpretation of Saliency-based Explanation Over Text
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
Tags: MILM, XAI, FAtt
27 Jan 2022

Invariant Rationalization
Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
22 Mar 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017