ResearchTrend.AI
Crowdsourcing Evaluation of Saliency-based XAI Methods

27 June 2021
Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima
Tags: XAI, FAtt

Papers citing "Crowdsourcing Evaluation of Saliency-based XAI Methods"

10 / 10 papers shown
A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez
31 May 2024

Evaluating Saliency Explanations in NLP by Crowdsourcing
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
Tags: XAI, FAtt, LRM
17 May 2024

Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification
Matteo Bianchi, Antonio De Santis, Andrea Tocchetti, Marco Brambilla
Tags: MILM, FAtt
06 May 2024

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller
Tags: HAI
03 Apr 2024

Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 Nov 2023

Multiview Representation Learning from Crowdsourced Triplet Comparisons
Xiaotian Lu, Jiyi Li, Koh Takeuchi, H. Kashima
Tags: SSL
08 Feb 2023

Trustworthy Human Computation: A Survey
H. Kashima, S. Oyama, Hiromi Arai, Junichiro Mori
22 Oct 2022

When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Jon Vadillo, Roberto Santana, Jose A. Lozano
Tags: SILM, AAML
05 Jul 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
Tags: XAI
13 Jan 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Tags: XAI, FaML
28 Feb 2017