Needle in a Haystack: An Analysis of High-Agreement Workers on MTurk for Summarization

20 December 2022
Lining Zhang
Simon Mille
Yufang Hou
Daniel Deutsch
Elizabeth Clark
Yixin Liu
Saad Mahamood
Sebastian Gehrmann
Miruna Clinciu
Khyathi Raghavi Chandu
João Sedoc
arXiv:2212.10397
Abstract

To avoid wasting resources on low-quality annotations, we need a method for building a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. We therefore investigate recruiting high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can filter out subpar workers before they carry out the evaluations and obtain high-agreement annotations under comparable resource constraints. Although our workers show strong agreement both among themselves and with CloudResearch workers, their alignment with expert judgments on a subset of the data falls short of expectations, indicating that further training on correctness is needed. The paper nonetheless offers best practices for recruiting qualified annotators for other challenging annotation tasks.
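
The abstract does not specify how agreement is scored in the two-step pipeline, so the following is only a minimal sketch of one common choice: computing Fleiss' kappa over a qualification round of Likert ratings and keeping only batches that clear a chosen threshold. The category counts, the 0.4 cut-off, and the function name are illustrative assumptions, not the authors' actual setup.

# Minimal sketch (illustrative only): screen a qualification round by
# inter-annotator agreement using Fleiss' kappa. The rating matrix and
# the AGREEMENT_THRESHOLD value below are hypothetical.
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) count matrix.

    ratings[i, j] = number of annotators who assigned category j to item i.
    Assumes every item received the same number of ratings.
    """
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()  # ratings per item
    # Proportion of all assignments falling into each category.
    p_cat = ratings.sum(axis=0) / (n_items * n_raters)
    # Per-item observed agreement.
    p_item = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_item.mean()        # mean observed agreement
    p_exp = (p_cat ** 2).sum()   # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

# Toy qualification round: 5 summaries rated by 3 workers on a 5-point
# scale, flattened into per-item category counts (made-up numbers).
counts = np.array([
    [0, 0, 1, 2, 0],
    [0, 0, 0, 3, 0],
    [0, 1, 2, 0, 0],
    [0, 0, 0, 1, 2],
    [3, 0, 0, 0, 0],
])

AGREEMENT_THRESHOLD = 0.4  # hypothetical cut-off for "high agreement"
kappa = fleiss_kappa(counts)
print(f"Fleiss' kappa = {kappa:.3f}, pass = {kappa >= AGREEMENT_THRESHOLD}")

In practice the same idea extends to the second pipeline step: workers whose qualification-round ratings agree above the threshold are invited to the main evaluation, while the rest are filtered out before any costly annotation is collected.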
