A Tutorial on LLM Reasoning: Relevant Methods behind ChatGPT o1

Abstract

OpenAI o1 has shown that applying reinforcement learning to integrate reasoning steps directly during inference can significantly improve a model's reasoning capabilities. This result is exciting as the field transitions from the conventional autoregressive method of generating answers to a more deliberate approach that models the slow-thinking process through step-by-step reasoning training. Reinforcement learning plays a key role in both the model's training and decoding processes. In this article, we present a comprehensive formulation of reasoning problems and investigate the use of both model-based and model-free approaches to better support this slow-thinking framework.

@article{wang2025_2502.10867,
  title={A Tutorial on LLM Reasoning: Relevant Methods behind ChatGPT o1},
  author={Jun Wang},
  journal={arXiv preprint arXiv:2502.10867},
  year={2025}
}