Threading the Needle: Reweaving Chain-of-Thought Reasoning to Explain Human Label Variation

29 May 2025
Beiduo Chen
Yang Janet Liu
Anna Korhonen
Barbara Plank
Main: 7 pages · 10 figures · Bibliography: 5 pages · 14 tables · Appendix: 10 pages
Abstract

The recent rise of reasoning-tuned Large Language Models (LLMs), which generate chains of thought (CoTs) before giving the final answer, has attracted significant attention and offers new opportunities for gaining insight into human label variation (HLV), which refers to plausible differences in how multiple annotators label the same data instance. Prior work has shown that LLM-generated explanations can help align model predictions with human label distributions, but it typically adopts a reverse paradigm: producing explanations based on given answers. In contrast, CoTs provide a forward reasoning path that may implicitly embed rationales for each answer option before the answer is generated. We thus propose a novel LLM-based pipeline enriched with linguistically grounded discourse segmenters to extract supporting and opposing statements for each answer option from CoTs with improved accuracy. We also propose a rank-based HLV evaluation framework that prioritizes the ranking of answers over exact scores, which instead favor direct comparison of label distributions. Our method outperforms a direct generation method as well as baselines on three datasets, and shows better alignment of ranking methods with human judgments, highlighting the effectiveness of our approach.
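The abstract describes two components that a short illustration can make concrete. The sketch below is not the authors' released pipeline: the function names (segment_cot, score_options, rank_alignment) are hypothetical, the naive sentence split and negation-cue scorer are crude stand-ins for the paper's linguistically grounded discourse segmenters and LLM-based extraction of supporting/opposing statements, and the evaluation simply uses Spearman correlation (SciPy) to show what "ranking answers instead of comparing exact scores" means.

```python
import re

from scipy.stats import spearmanr


def segment_cot(cot: str) -> list[str]:
    # Naive sentence split as a stand-in for the paper's linguistically
    # grounded discourse segmenters (hypothetical simplification).
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", cot) if s.strip()]


def score_options(segments: list[str], options: list[str]) -> dict[str, float]:
    # Toy stand-in for the paper's LLM-based extraction: a segment that
    # mentions an option counts as supporting it (+1), or opposing it (-1)
    # when a negation cue appears in the same segment.
    negation = re.compile(r"\b(not|no|unlikely|against|doubt)\b", re.I)
    scores = {opt: 0.0 for opt in options}
    for seg in segments:
        for opt in options:
            if opt.lower() in seg.lower():
                scores[opt] += -1.0 if negation.search(seg) else 1.0
    return scores


def rank_alignment(pred: dict[str, float], human: dict[str, float]) -> float:
    # Rank-based HLV evaluation: Spearman correlation between the answer
    # ordering implied by the model and by the human label distribution,
    # rather than a direct distance between the two distributions.
    options = sorted(pred)
    rho, _ = spearmanr([pred[o] for o in options],
                       [human[o] for o in options])
    return rho


if __name__ == "__main__":
    cot = ("The premise mentions a dog running, which supports entailment. "
           "The hypothesis adds a beach the premise never describes, "
           "so neutral is also plausible. Contradiction seems unlikely.")
    options = ["entailment", "neutral", "contradiction"]
    pred = score_options(segment_cot(cot), options)
    human = {"entailment": 0.5, "neutral": 0.4, "contradiction": 0.1}
    print(pred)                         # per-option support scores from the CoT
    print(rank_alignment(pred, human))  # rank agreement with annotators
```

The contrast the abstract draws is visible here: an exact-score metric would penalize the model for miscalibrated probabilities even when it orders the plausible answers the same way the annotators do, whereas the rank-based view credits that agreement directly.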

@article{chen2025_2505.23368,
  title={Threading the Needle: Reweaving Chain-of-Thought Reasoning to Explain Human Label Variation},
  author={Beiduo Chen and Yang Janet Liu and Anna Korhonen and Barbara Plank},
  journal={arXiv preprint arXiv:2505.23368},
  year={2025}
}