Says Who? Effective Zero-Shot Annotation of Focalization

17 September 2024
Rebecca M. M. Hicke
Yuri Bizzoni
Pascale Feldkamp
Ross Deans Kristensen-McLachlan
Abstract

Focalization, the perspective through which narrative is presented, is encoded via a wide range of lexico-grammatical features and is subject to reader interpretation. Even trained annotators frequently disagree on correct labels, suggesting this task is both qualitatively and computationally challenging. In this work, we test how well five contemporary large language model (LLM) families and two baselines perform when annotating short literary excerpts for focalization. Despite the challenging nature of the task, we find that LLMs show comparable performance to trained human annotators, with GPT-4o achieving an average F1 of 84.79%. Further, we demonstrate that the log probabilities output by GPT-family models frequently reflect the difficulty of annotating particular excerpts. Finally, we provide a case study analyzing sixteen Stephen King novels, demonstrating the usefulness of this approach for computational literary studies and the insights gleaned from examining focalization at scale.
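The zero-shot setup the abstract describes can be illustrated with a short sketch. The Python snippet below is an assumption-laden illustration, not the authors' pipeline: the prompt wording and the label set (internal, external, zero) are hypothetical, and only the model name GPT-4o and the use of output log probabilities as a difficulty signal come from the abstract. It uses the OpenAI Chat Completions API to request per-token log probabilities alongside the predicted label.

# A minimal sketch (not the authors' exact prompt or label set) of zero-shot
# focalization annotation with token log probabilities via the OpenAI API.
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["internal", "external", "zero"]  # hypothetical label set for illustration

def annotate_focalization(excerpt: str) -> tuple[str, float]:
    """Ask the model for a one-word focalization label and return it with
    the probability of its first token as a rough confidence signal."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a literary annotator. Label the focalization "
                        f"of the excerpt with exactly one of: {', '.join(LABELS)}."},
            {"role": "user", "content": excerpt},
        ],
        logprobs=True,  # request per-token log probabilities
        max_tokens=3,
    )
    choice = response.choices[0]
    label = choice.message.content.strip().lower()
    # Convert the first token's log probability to a probability; low values
    # may flag excerpts that are hard to annotate, echoing the paper's finding
    # that log probabilities frequently reflect annotation difficulty.
    first_token = choice.logprobs.content[0]
    confidence = math.exp(first_token.logprob)
    return label, confidence

Running this over a corpus of short excerpts and sorting by the returned confidence would surface the passages a human annotator may want to double-check first.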

@article{hicke2025_2409.11390,
  title={Says Who? Effective Zero-Shot Annotation of Focalization},
  author={Rebecca M. M. Hicke and Yuri Bizzoni and Pascale Feldkamp and Ross Deans Kristensen-McLachlan},
  journal={arXiv preprint arXiv:2409.11390},
  year={2025}
}