Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings

21 March 2025
Deniss Ruder
Andero Uusberg
Kairit Sirts
Abstract

Appraisal theories suggest that emotions arise from subjective evaluations of events, referred to as appraisals. The taxonomy of appraisals is quite diverse, and appraisals are usually rated on a Likert scale under either an experiencer-annotator or a reader-annotator paradigm. This paper studies GPT-4 as a reader-annotator of 21 specific appraisal ratings in different prompt settings, aiming to evaluate and improve its performance relative to human annotators. We found that GPT-4 is an effective reader-annotator that performs close to, or even slightly better than, human annotators, and that its results can be significantly improved by majority voting over five completions. GPT-4 also effectively predicts appraisal ratings and emotion labels from a single prompt, but adding instruction complexity degrades performance. We also found that longer event descriptions lead to more accurate annotations for both model and human annotators. This work contributes to the growing use of LLMs in psychology and to strategies for improving GPT-4's performance in annotating appraisals.
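The majority-voting strategy mentioned in the abstract is straightforward to sketch. The snippet below is a minimal illustration, not the authors' code: the model name, prompt wording, Likert range (1-7), and rating-parsing logic are all assumptions. It samples five chat completions for a single appraisal dimension and returns the most common rating.

import re
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_appraisal(event: str, appraisal: str, n: int = 5) -> int:
    """Return the majority-voted Likert rating (1-7) across n completions."""
    # Hypothetical prompt; the paper's exact prompt settings are not shown here.
    prompt = (
        f"Event: {event}\n"
        f"On a scale from 1 to 7, how strongly does this event reflect "
        f"the appraisal '{appraisal}'? Answer with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,              # five completions, as in the paper
        temperature=1.0,  # sampling diversity so the votes can differ
    )
    ratings = []
    for choice in response.choices:
        match = re.search(r"\d+", choice.message.content)
        if match:
            ratings.append(int(match.group()))
    if not ratings:
        raise ValueError("No numeric rating found in any completion")
    # Majority vote: the most frequent rating wins; ties fall to the first seen.
    return Counter(ratings).most_common(1)[0][0]

print(rate_appraisal("I failed my driving test after months of practice.",
                     "goal obstruction"))

Voting over sampled completions reduces the variance of any single stochastic generation, which is why it can outperform a one-shot rating.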

@article{ruder2025_2503.16883,
  title={Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings},
  author={Deniss Ruder and Andero Uusberg and Kairit Sirts},
  journal={arXiv preprint arXiv:2503.16883},
  year={2025}
}