Large Language Models Are Effective Human Annotation Assistants, But Not Good Independent Annotators

Abstract

Event annotation is important for identifying market changes, monitoring breaking news, and understanding sociological trends. Although expert annotators set the gold standards, human coding is expensive and inefficient. Unlike information extraction experiments that focus on single contexts, we evaluate a holistic workflow that removes irrelevant documents, merges documents about the same event, and annotates the events. Although LLM-based automated annotations are better than traditional TF-IDF-based methods for Event Set Curation, they are still not reliable annotators compared to human experts. However, adding LLMs to assist experts with Event Set Curation can reduce the time and mental effort required for Variable Annotation. When LLMs extract event variables to assist expert annotators, the experts agree with the extracted variables more often than with the output of fully automated LLM annotation.
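
The workflow above has two phases: Event Set Curation (filtering irrelevant documents and merging documents about the same event), followed by Variable Annotation. As a rough illustration of the kind of TF-IDF-based baseline the paper compares against, the Python sketch below filters documents by similarity to an event query and then greedily merges near-duplicates. The function name filter_and_merge, the thresholds, and the greedy grouping are assumptions for illustration only, not the paper's implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_and_merge(docs, query, keep_threshold=0.1, merge_threshold=0.6):
    """Hypothetical TF-IDF baseline for Event Set Curation: drop documents
    irrelevant to the event query, then group near-duplicates into event
    sets. Thresholds are illustrative, not values from the paper."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(docs + [query])
    doc_vecs, query_vec = matrix[:-1], matrix[-1]

    # Phase 1 (filtering): keep only documents similar enough to the query.
    relevance = cosine_similarity(doc_vecs, query_vec).ravel()
    kept = [i for i, score in enumerate(relevance) if score >= keep_threshold]
    if not kept:
        return []

    # Phase 2 (merging): greedily group kept documents whose pairwise
    # similarity exceeds the merge threshold into the same event set.
    sims = cosine_similarity(doc_vecs[kept])
    event_sets, assigned = [], set()
    for a in range(len(kept)):
        if a in assigned:
            continue
        group = [kept[a]]
        assigned.add(a)
        for b in range(a + 1, len(kept)):
            if b not in assigned and sims[a, b] >= merge_threshold:
                group.append(kept[b])
                assigned.add(b)
        event_sets.append(group)
    return event_sets

An LLM-based variant would replace these cosine-similarity decisions with model judgments of relevance and event identity, which is where the abstract reports gains over the TF-IDF baseline.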

@article{gu2025_2503.06778,
  title={Large Language Models Are Effective Human Annotation Assistants, But Not Good Independent Annotators},
  author={Feng Gu and Zongxia Li and Carlos Rafael Colon and Benjamin Evans and Ishani Mondal and Jordan Lee Boyd-Graber},
  journal={arXiv preprint arXiv:2503.06778},
  year={2025}
}