In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers
V. Pradhan, M. Schaekermann, Matthew Lease
arXiv:2112.02255, 4 December 2021
Papers citing "In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers" (8 papers)
1. A Culturally-Aware Tool for Crowdworkers: Leveraging Chronemics to Support Diverse Work Styles (31 Jul 2024)
   Carlos Toxtli, Christopher Curtis, Saiph Savage

2. ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models (28 May 2024)
   Aparna Elangovan, Ling Liu, Lei Xu, S. Bodapati, Dan Roth
   Tags: ELM

3. Case Law Grounding: Aligning Judgments of Humans and AI on Socially-Constructed Concepts (10 Oct 2023)
   Quan Ze Chen, Amy X. Zhang
   Tags: ELM

4. A Large Language Model Approach to Educational Survey Feedback Analysis (29 Sep 2023)
   Michael J. Parker, Caitlin Anderson, Claire Stone, YeaRim Oh
   Tags: ELM, LM&MA, AI4Ed

5. Evaluating AI systems under uncertain ground truth: a case study in dermatology (05 Jul 2023)
   David Stutz, A. Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, ..., Yossi Matias, Pushmeet Kohli, Yun-hui Liu, Arnaud Doucet, Alan Karthikesalingam

6. Judgment Sieve: Reducing Uncertainty in Group Judgments through Interventions Targeting Ambiguity versus Disagreement (02 May 2023)
   Quan Ze Chen, Amy X. Zhang

7. Discovering and Validating AI Errors With Crowdsourced Failure Reports (23 Sep 2021)
   Ángel Alexander Cabrera, Abraham J. Druck, Jason I. Hong, Adam Perer
   Tags: HAI

8. Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets (21 Aug 2019)
   Mor Geva, Yoav Goldberg, Jonathan Berant