Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs

26 February 2025
Dayu Yang
Tianyang Liu
Daoan Zhang
Antoine Simoulin
Xiaoyi Liu
Yuwei Cao
Zhaopu Teng
Xin Qian
Grey Yang
Jiebo Luo
Julian McAuley
Abstract

In large language models (LLMs), code and reasoning reinforce each other: code offers an abstract, modular, and logic-driven structure that supports reasoning, while reasoning translates high-level goals into smaller, executable steps that drive more advanced code intelligence. In this survey, we examine how code serves as a structured medium for enhancing reasoning: it provides verifiable execution paths, enforces logical decomposition, and enables runtime validation. We also explore how improvements in reasoning have transformed code intelligence from basic completion to advanced capabilities, enabling models to address complex software engineering tasks through planning and debugging. Finally, we identify key challenges and propose future research directions to strengthen this synergy, ultimately improving LLMs' performance in both areas.
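To make the idea of code as a verifiable reasoning medium concrete, below is a minimal sketch (not taken from the paper) of a program-of-thought-style loop: a generator emits a small Python program for a word problem, the program is executed, and runtime assertions validate the intermediate steps. The function generate_code is a hypothetical stand-in for an LLM call; here it returns a fixed snippet so the sketch stays self-contained and runnable.

def generate_code(question: str) -> str:
    """Hypothetical stand-in for an LLM call that returns a program-of-thought.

    In practice this would query a code-capable LLM; the fixed snippet below
    keeps the sketch runnable without any model access.
    """
    return (
        "apples_start = 23\n"
        "apples_used = 20\n"
        "apples_bought = 6\n"
        "remaining = apples_start - apples_used\n"
        "assert remaining >= 0, 'cannot use more apples than available'\n"
        "answer = remaining + apples_bought\n"
    )


def solve_with_code(question: str) -> int:
    """Execute the generated program and return its validated result."""
    program = generate_code(question)
    namespace: dict = {}
    # Runtime validation: any assertion in the generated code fires here,
    # giving a verifiable execution path instead of free-form text reasoning.
    exec(program, {}, namespace)
    return namespace["answer"]


if __name__ == "__main__":
    q = ("The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
         "How many apples do they have?")
    print(solve_with_code(q))  # -> 9, checkable by simply re-running the code

The design point this illustrates is the one the abstract makes: encoding each reasoning step as executable code decomposes the problem into small, checkable operations whose correctness can be tested at runtime rather than only judged from generated text.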

@article{yang2025_2502.19411,
  title={Code to Think, Think to Code: A Survey on Code-Enhanced Reasoning and Reasoning-Driven Code Intelligence in LLMs},
  author={Dayu Yang and Tianyang Liu and Daoan Zhang and Antoine Simoulin and Xiaoyi Liu and Yuwei Cao and Zhaopu Teng and Xin Qian and Grey Yang and Jiebo Luo and Julian McAuley},
  journal={arXiv preprint arXiv:2502.19411},
  year={2025}
}