Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

24 January 2020
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller

Papers citing "Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience"

19 / 19 papers shown
ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing
Hua Shen, Chieh-Yang Huang, Tongshuang Wu, Ting-Hao 'Kenneth' Huang
16 May 2023
Fine-tuning of explainable CNNs for skin lesion classification based on dermatologists' feedback towards increasing trust
Md Abdul Kadir, Fabrizio Nunnari, Daniel Sonntag
03 Apr 2023
Selective Explanations: Leveraging Human Input to Align Explainable AI
Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan
23 Jan 2023
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
18 Jan 2023
A Human-ML Collaboration Framework for Improving Video Content Reviews
Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen
18 Oct 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus, A. Ravichandran, Sebastian Möller
13 Jun 2022
Perspectives on Incorporating Expert Feedback into Model Updates
Patterns, 2022
Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar
13 May 2022
Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
International Conference on Human Factors in Computing Systems (CHI), 2022
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
25 Apr 2022
User Driven Model Adjustment via Boolean Rule Explanations
AAAI Conference on Artificial Intelligence (AAAI), 2021
Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair
28 Mar 2022
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan
21 Dec 2021
Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Q. V. Liao, R. Varshney
20 Oct 2021
A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification
IEEE Access, 2021
M. Herde, Denis Huseljic, Bernhard Sick, A. Calma
23 Sep 2021
Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods
Patrick Kage, Pavlos Andreadis
04 Jul 2021
Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models
International Conference on Intelligent User Interfaces (IUI), 2021
Soya Park, A. Wang, B. Kawas, Q. V. Liao, David Piorkowski, Marina Danilevsky
29 Jan 2021
Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Artificial Intelligence, 2021
Danding Wang, Wencan Zhang, Brian Y. Lim
23 Jan 2021
Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
Han Liu, Vivian Lai, Chenhao Tan
13 Jan 2021
Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Klaus Mueller
06 Sep 2020
ALEX: Active Learning based Enhancement of a Model's Explainability
International Conference on Information and Knowledge Management (CIKM), 2020
Ishani Mondal, Debasis Ganguly
02 Sep 2020
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2020
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
28 Aug 2020