ResearchTrend.AI

Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Arithmetic Reasoning

2 December 2024
Keito Kudo
Yoichi Aoki
Tatsuki Kuribayashi
Shusaku Sone
Masaya Taniguchi
Ana Brassard
Keisuke Sakaguchi
Kentaro Inui
    ReLM
    LRM
Abstract

This study investigates the internal reasoning process of language models during multi-step arithmetic reasoning, motivated by the question of when they internally form their answers. In particular, we inspect whether the answer is determined before or after the chain of thought (CoT) begins, i.e., whether models follow a post-hoc Think-to-Talk mode or a step-by-step Talk-to-Think mode of explanation. Through causal probing experiments on controlled arithmetic reasoning tasks, we found systematic internal reasoning patterns across the models in our case study: for example, single-step subproblems are solved before the CoT begins, while more complicated multi-step calculations are performed during the CoT.
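The paper's actual probing methodology is not detailed on this page. As a rough, hypothetical illustration of the general idea behind probing — training a simple classifier to read a property out of a model's hidden states — here is a minimal sketch on synthetic vectors. All dimensions, data, and the probed property are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 "hidden states" of dimension 64, in which a
# binary property (e.g. "is this subproblem's answer already decided?")
# is linearly encoded along a fixed direction, plus Gaussian noise.
n, d = 500, 64
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)          # the property to probe for
hidden = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, direction)

# Split into train/test and fit a least-squares linear probe.
train, test = slice(0, 400), slice(400, None)
w, *_ = np.linalg.lstsq(hidden[train], 2.0 * labels[train] - 1.0, rcond=None)
pred = (hidden[test] @ w > 0).astype(int)
accuracy = (pred == labels[test]).mean()
print(f"probe accuracy: {accuracy:.2f}")     # high if linearly decodable
```

High probe accuracy only shows the property is decodable from the representation; the causal experiments the abstract refers to would additionally intervene on the states, which this sketch does not attempt.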

@article{kudo2025_2412.01113,
  title={Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Arithmetic Reasoning},
  author={Keito Kudo and Yoichi Aoki and Tatsuki Kuribayashi and Shusaku Sone and Masaya Taniguchi and Ana Brassard and Keisuke Sakaguchi and Kentaro Inui},
  journal={arXiv preprint arXiv:2412.01113},
  year={2025}
}