Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors

17 October 2024
Georgios Chochlakis
Alexandros Potamianos
Kristina Lerman
Shrikanth Narayanan
Abstract

In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs). The knowledge acquired during pre-training is crucial for this few-shot capability, providing the model with task priors. However, recent studies have shown that ICL predominantly relies on retrieving task priors rather than "learning" to perform tasks. This limitation is particularly evident in complex subjective domains such as emotion and morality, where priors significantly influence posterior predictions. In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt. Moreover, we evaluate the posterior bias towards certain annotators by grounding our study in appropriate, quantitative measures of LLM priors. Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead. However, aggregation does not explain the entire gap between ICL and the state of the art, meaning other factors in such tasks also account for the observed phenomena. Finally, by rigorously studying annotator-level labels, we find that it is possible for minority annotators to both better align with LLMs and have their perspectives further amplified.
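
To make the aggregation concern concrete, here is a minimal illustrative sketch (not the authors' code or data): it shows how majority-vote aggregation of low-agreement, subjective annotations can systematically override a minority annotator's perspective. The items, annotator names, and emotion labels are hypothetical.

```python
# Illustrative sketch: majority-vote aggregation of subjective annotations
# and per-annotator agreement with the aggregated "gold" label.
# All items, annotators, and labels below are hypothetical.

from collections import Counter

# Per-item emotion annotations from three hypothetical annotators.
annotations = {
    "item_1": {"ann_A": "anger", "ann_B": "anger",   "ann_C": "sadness"},
    "item_2": {"ann_A": "joy",   "ann_B": "neutral", "ann_C": "neutral"},
    "item_3": {"ann_A": "anger", "ann_B": "fear",    "ann_C": "fear"},
    "item_4": {"ann_A": "joy",   "ann_B": "joy",     "ann_C": "sadness"},
}

def majority_vote(labels):
    """Aggregate one item's labels by majority vote (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

# Aggregated labels, as typically released with subjective datasets.
aggregated = {item: majority_vote(list(labs.values()))
              for item, labs in annotations.items()}

# Per-annotator agreement with the aggregate: low values indicate that
# aggregation frequently overrides that annotator's perspective.
annotators = sorted({a for labs in annotations.values() for a in labs})
for ann in annotators:
    matches = sum(annotations[item][ann] == aggregated[item] for item in annotations)
    print(f"{ann}: agrees with aggregate on {matches}/{len(annotations)} items")
```

Running this toy example, ann_B agrees with the aggregate on every item while ann_A and ann_C each agree on only half, illustrating how disparate annotations collapse into labels that reflect some annotators far more than others.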
@article{chochlakis2025_2410.13776,
  title={Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors},
  author={Georgios Chochlakis and Alexandros Potamianos and Kristina Lerman and Shrikanth Narayanan},
  journal={arXiv preprint arXiv:2410.13776},
  year={2025}
}