How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2020
26 August 2020
Hua Shen
Ting-Hao 'Kenneth' Huang
FAtt
HAI
Papers citing
"How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels"
40 of 40 papers shown
FACE: Faithful Automatic Concept Extraction
Dipkamal Bhusal
Michael Clifford
Sara Rampazzi
Nidhi Rastogi
CVBM
13 Oct 2025
Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models
Zhanna Kaufman
Madeline Endres
Cindy Xiong Bearfield
Yuriy Brun
31 Jul 2025
On the Performance of Concept Probing: The Influence of the Data (Extended Version)
Manuel de Sousa Ribeiro
Afonso Leote
João Leite
24 Jul 2025
Concept Probing: Where to Find Human-Defined Concepts (Extended Version)
Manuel de Sousa Ribeiro
Afonso Leote
João Leite
24 Jul 2025
What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), 2025
Somayeh Molaei
Lionel P. Robert
Nikola Banovic
09 May 2025
Prompting in the Dark: Assessing Human Performance in Prompt Engineering for Data Labeling When Gold Labels Are Absent
International Conference on Human Factors in Computing Systems (CHI), 2025
Zeyu He
Saniya Naphade
Ting-Hao 'Kenneth' Huang
16 Feb 2025
Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Maxime Kayser
Bayar I. Menzat
Cornelius Emde
Bogdan Bercean
Alex Novak
Abdala Espinosa
B. Papież
Susanne Gaube
Thomas Lukasiewicz
Oana-Maria Camburu
16 Oct 2024
ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen
Tiffany Knearem
Reshmi Ghosh
Yu-Ju Yang
Nicholas Clark
Tanushree Mitra
Yun Huang
15 Sep 2024
Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa
Sumohana S. Channappayya
17 Jun 2024
Graphical Perception of Saliency-based Model Explanations
Yayan Zhao
Mingwei Li
Matthew Berger
XAI
FAtt
11 Jun 2024
How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
International Journal of Human-Computer Interaction (IJHCI), 2024
Romy Müller
HAI
03 Apr 2024
Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin
Thomas Fel
Srijani Saha
Talia Konkle
George A. Alvarez
FAtt
15 Feb 2024
Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
International Journal of Human-Computer Interaction (IJHCI), 2023
Romy Müller
Marius Thoss
Julian Ullrich
Steffen Seitz
Carsten Knoll
21 Nov 2023
Representing visual classification as a linear combination of words
Shobhit Agarwal
Yevgeniy R. Semenov
William Lotter
18 Nov 2023
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
IEEE International Conference on Computer Vision (ICCV), 2023
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
AAML
11 Aug 2023
FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis
Conference on Computer and Communications Security (CCS), 2023
Yiling He
Jian Lou
Zhan Qin
Kui Ren
FAtt
AAML
10 Aug 2023
Interpreting and Correcting Medical Image Classification with PIP-Net
Meike Nauta
J. H. Hegeman
J. Geerdink
Jorg Schlotterer
M. V. Keulen
Christin Seifert
MedIm
19 Jul 2023
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Neural Information Processing Systems (NeurIPS), 2023
Thomas Fel
Victor Boutin
Mazda Moayeri
Rémi Cadène
Louis Bethune
Léo Andéol
Mathieu Chalvidal
Thomas Serre
FAtt
11 Jun 2023
ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing
Hua Shen
Chieh-Yang Huang
Tongshuang Wu
Ting-Hao 'Kenneth' Huang
16 May 2023
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Fanny Jourdan
Agustin Picard
Thomas Fel
Laurent Risser
Jean-Michel Loubes
Nicholas M. Asher
11 May 2023
From Explanation to Action: An End-to-End Human-in-the-loop Framework for Anomaly Reasoning and Management
Xueying Ding
Nikita Seleznev
Senthil Kumar
C. Bayan Bruss
Leman Akoglu
06 Apr 2023
Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin
Xiaoxiao Li
Ghassan Hamarneh
30 Mar 2023
On Modifying a Neural Network's Perception
Manuel de Sousa Ribeiro
João Leite
AAML
05 Mar 2023
Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Weina Jin
Jianyu Fan
D. Gromala
Philippe Pasquier
Ghassan Hamarneh
10 Feb 2023
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
International Conference on Learning Representations (ICLR), 2022
Julius Adebayo
M. Muelly
H. Abelson
Been Kim
09 Dec 2022
CRAFT: Concept Recursive Activation FacTorization for Explainability
Computer Vision and Pattern Recognition (CVPR), 2022
Thomas Fel
Agustin Picard
Louis Bethune
Thibaut Boissin
David Vigouroux
Julien Colin
Rémi Cadène
Thomas Serre
17 Nov 2022
Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Yao Rong
Tobias Leemann
Thai-trang Nguyen
Lisa Fiedler
Peizhu Qian
Vaibhav Unhelkar
Tina Seidel
Gjergji Kasneci
Enkelejda Kasneci
ELM
20 Oct 2022
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
International Conference on Human Factors in Computing Systems (CHI), 2022
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
02 Oct 2022
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour
Arghasree Banerjee
Matthew J. Guzdial
XAI
07 Sep 2022
Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Neural Information Processing Systems (NeurIPS), 2022
Giang Nguyen
Mohammad Reza Taesiri
Anh Totti Nguyen
26 Jul 2022
Human-Centric Research for NLP: Towards a Definition and Guiding Questions
Bhushan Kotnis
Kiril Gashteovski
J. Gastinger
G. Serra
Francesco Alesiani
T. Sztyler
Ammar Shaker
Na Gong
Carolin (Haas) Lawrence
Zhao Xu
10 Jul 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
International Conference on Learning Representations (ICLR), 2022
Leon Sixt
M. Schuessler
Oana-Iuliana Popescu
Philipp Weiß
Tim Landgraf
FAtt
25 Apr 2022
Are Shortest Rationales the Best Explanations for Human Understanding?
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Hua Shen
Tongshuang Wu
Wenbo Guo
Ting-Hao 'Kenneth' Huang
FAtt
16 Mar 2022
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Neural Information Processing Systems (NeurIPS), 2021
Julien Colin
Thomas Fel
Rémi Cadène
Thomas Serre
06 Dec 2021
HIVE: Evaluating the Human Interpretability of Visual Explanations
European Conference on Computer Vision (ECCV), 2022
Sunnie S. Y. Kim
Nicole Meister
V. V. Ramaswamy
Ruth C. Fong
Olga Russakovsky
06 Dec 2021
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Neural Information Processing Systems (NeurIPS), 2021
Giang Nguyen
Daeyoung Kim
Anh Totti Nguyen
FAtt
31 May 2021
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments
M. Schuessler
Philipp Weiß
Leon Sixt
06 May 2021
Explaining the Road Not Taken
Hua Shen
Ting-Hao 'Kenneth' Huang
FAtt
XAI
27 Mar 2021
Debugging Tests for Model Explanations
Julius Adebayo
M. Muelly
Ilaria Liccardi
Been Kim
FAtt
10 Nov 2020
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski
Roland S. Zimmermann
Judith Schepers
Robert Geirhos
Thomas S. A. Wallis
Matthias Bethge
Wieland Brendel
FAtt
23 Oct 2020