Can Third-parties Read Our Emotions?

25 April 2025
Jiayi Li
Yingfan Zhou
Pranav Narayanan Venkit
Halima Binte Islam
Sneha Arya
Shomir Wilson
Sarah Rajtmajer
Abstract

Natural Language Processing tasks that aim to infer an author's private states, e.g., emotions and opinions, from their written text typically rely on datasets annotated by third-party annotators. However, the assumption that third-party annotators can accurately capture authors' private states remains largely unexamined. In this study, we present human subjects experiments on emotion recognition tasks that directly compare third-party annotations with first-party (author-provided) emotion labels. Our findings reveal significant limitations in third-party annotations, whether provided by human annotators or large language models (LLMs), in faithfully representing authors' private states; however, LLMs outperform human annotators nearly across the board. We further explore methods to improve third-party annotation quality. We find that demographic similarity between first-party authors and third-party human annotators enhances annotation performance, and that incorporating first-party demographic information into prompts yields a marginal but statistically significant improvement in LLM performance. We introduce a framework for evaluating the limitations of third-party annotations and call for refined annotation practices to accurately represent and model authors' private states.
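The core comparison the abstract describes — third-party emotion annotations measured against first-party (author-provided) labels — can be illustrated with a standard chance-corrected agreement metric. The sketch below computes Cohen's kappa between two label sequences; the emotion labels and data are hypothetical examples, not taken from the paper, and the paper's actual evaluation framework may use different metrics.

```python
from collections import Counter

def cohens_kappa(first_party, third_party):
    """Chance-corrected agreement between two label sequences."""
    assert len(first_party) == len(third_party) and first_party
    n = len(first_party)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(first_party, third_party)) / n
    # Expected agreement under independent marginal label distributions.
    c1, c2 = Counter(first_party), Counter(third_party)
    p_e = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical first-party labels vs. one third-party annotator.
authors   = ["joy", "anger", "sadness", "joy", "fear", "joy"]
annotator = ["joy", "sadness", "sadness", "joy", "fear", "anger"]
print(round(cohens_kappa(authors, annotator), 3))
```

Values near 1 indicate the annotator faithfully recovers the authors' self-reported emotions; values near 0 indicate agreement no better than chance, which is the failure mode the study probes.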

@article{li2025_2504.18673,
  title={Can Third-parties Read Our Emotions?},
  author={Jiayi Li and Yingfan Zhou and Pranav Narayanan Venkit and Halima Binte Islam and Sneha Arya and Shomir Wilson and Sarah Rajtmajer},
  journal={arXiv preprint arXiv:2504.18673},
  year={2025}
}