ResearchTrend.AI

From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step
arXiv 2405.14838 · 23 May 2024
Yuntian Deng, Yejin Choi, Stuart M. Shieber
ReLM · LRM

Papers citing "From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step"

9 / 59 papers shown
Reasoning Bias of Next Token Prediction Training
Pengxiao Lin, Zhongwang Zhang, Zhi-Qin John Xu
LRM · 478 · 2 · 0 · 21 Feb 2025
Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Alireza Amiri, Xinting Huang, Mark Rofin, Michael Hahn
LRM · 1.3K · 13 · 0 · 04 Feb 2025
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, Dimitris Papailiopoulos
ReLM · VLM · LRM · AI4CE · 423 · 19 · 0 · 03 Feb 2025
Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning
International Conference on Learning Representations (ICLR), 2024
Md Rifat Arefin, G. Subbaraj, Nicolas Angelard-Gontier, Yann LeCun, Irina Rish, Ravid Shwartz-Ziv, C. Pal
LRM · 1.0K · 4 · 0 · 04 Nov 2024
On Memorization of Large Language Models in Logical Reasoning
Chulin Xie, Yangsibo Huang, Chiyuan Zhang, Da Yu, Xinyun Chen, Bill Yuchen Lin, Bo Li, Badih Ghazi, Ravi Kumar
LRM · 457 · 94 · 0 · 30 Oct 2024
ToW: Thoughts of Words Improve Reasoning in Large Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Zhikun Xu, Ming Shen, Jacob Dineen, Zhaonan Li, Xiao Ye, Shijie Lu, Aswin Rrv, Chitta Baral, Ben Zhou
LRM · 952 · 2 · 0 · 21 Oct 2024
Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Weize Chen, Qixin Xu, Chen Qian, Cheng Yang, Zhiyuan Liu, Maosong Sun
LLMAG · 268 · 17 · 0 · 10 Oct 2024
Internalizing ASR with Implicit Chain of Thought for Efficient Speech-to-Speech Conversational LLM
Robin Shing-Hei Yuen, Timothy Tin-Long Tse, Jian Zhu
AuLLM · 180 · 4 · 0 · 25 Sep 2024
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
International Conference on Learning Representations (ICLR), 2024
Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett
ReLM · LRM · 650 · 232 · 0 · 18 Sep 2024