Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs

20 May 2025
Zhipeng Yang
Junzhuo Li
Siyu Xia
Xuming Hu
Main: 9 pages · Appendix: 16 pages · Bibliography: 4 pages · 19 figures · 6 tables
Abstract

We show that large language models (LLMs) exhibit an internal chain-of-thought: they sequentially decompose and execute composite tasks layer by layer. Two claims ground our study: (i) distinct subtasks are learned at different network depths, and (ii) these subtasks are executed sequentially across layers. On a benchmark of 15 two-step composite tasks, we employ layer-from context-masking and propose a novel cross-task patching method, confirming (i). To examine claim (ii), we apply LogitLens to decode hidden states, revealing a consistent layer-wise execution pattern. We further replicate our analysis on the real-world TRACE benchmark, observing the same stepwise dynamics. Together, our results enhance the transparency of LLMs by showing their capacity to internally plan and execute subtasks (or instructions), opening avenues for fine-grained, instruction-level activation steering.
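
The abstract refers to decoding intermediate hidden states with LogitLens. Below is a minimal sketch of that idea, assuming a Hugging Face GPT-2 model as a stand-in; the prompt, model choice, and variable names are illustrative assumptions, not the paper's actual setup. Each layer's hidden state at the final token position is projected through the model's final layer norm and unembedding matrix, revealing which token the model would predict at that depth.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in; the paper's models may differ
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A hypothetical two-step composite prompt (not taken from the paper's benchmark).
prompt = "Translate to French, then uppercase the result: hello ->"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states = (embedding output, layer 1 output, ..., layer N output)
ln_f = model.transformer.ln_f   # GPT-2's final layer norm
unembed = model.lm_head         # unembedding projection (weight-tied in GPT-2)

for layer_idx, h in enumerate(out.hidden_states):
    last_state = h[0, -1]                    # hidden state at the last token position
    logits = unembed(ln_f(last_state))       # "logit lens" projection into vocab space
    top_token = tok.decode(logits.argmax().item())
    print(f"layer {layer_idx:2d} -> {top_token!r}")
```

Tracking how the top decoded token changes with depth is one way to observe a layer-wise execution pattern of the kind the abstract describes.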
