Crowdsourcing Evaluation of Saliency-based XAI Methods
arXiv: 2107.00456 · 27 June 2021
Authors: Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima
Tags: XAI, FAtt

Papers citing "Crowdsourcing Evaluation of Saliency-based XAI Methods" (10 / 10 papers shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning | Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez | — | 0 | 31 May 2024 |
| Evaluating Saliency Explanations in NLP by Crowdsourcing | Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima | XAI, FAtt, LRM | 1 | 17 May 2024 |
| Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification | Matteo Bianchi, Antonio De Santis, Andrea Tocchetti, Marco Brambilla | MILM, FAtt | 1 | 06 May 2024 |
| How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps | Romy Müller | HAI | 6 | 03 Apr 2024 |
| Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI | Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll | — | 3 | 21 Nov 2023 |
| Multiview Representation Learning from Crowdsourced Triplet Comparisons | Xiaotian Lu, Jiyi Li, Koh Takeuchi, H. Kashima | SSL | 2 | 08 Feb 2023 |
| Trustworthy Human Computation: A Survey | H. Kashima, S. Oyama, Hiromi Arai, Junichiro Mori | — | 0 | 22 Oct 2022 |
| When and How to Fool Explainable Models (and Humans) with Adversarial Examples | Jon Vadillo, Roberto Santana, Jose A. Lozano | SILM, AAML | 12 | 05 Jul 2021 |
| Explainability of deep vision-based autonomous driving systems: Review and challenges | Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord | XAI | 170 | 13 Jan 2021 |
| Towards A Rigorous Science of Interpretable Machine Learning | Finale Doshi-Velez, Been Kim | XAI, FaML | 3,690 | 28 Feb 2017 |