
Reinforcement Learning Enhanced LLMs: A Survey

Abstract

Reinforcement learning (RL) enhanced large language models (LLMs), exemplified most prominently by DeepSeek-R1, have exhibited outstanding performance. Despite its effectiveness in improving LLM capabilities, RL remains difficult to implement, requiring intricate algorithms, reward modeling strategies, and optimization techniques. This complexity poses challenges for researchers and practitioners seeking a systematic understanding of RL-enhanced LLMs. Moreover, the absence of a comprehensive survey summarizing existing research on RL-enhanced LLMs has hindered further progress in this domain. In this work, we present a systematic review of the most up-to-date state of knowledge on RL-enhanced LLMs, consolidating and analyzing the rapidly growing body of research in this field to help researchers understand current challenges and advancements. Specifically, we (1) detail the basics of RL; (2) introduce popular RL-enhanced LLMs; (3) review research on two widely used reward model-based RL techniques: Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF); and (4) explore Direct Preference Optimization (DPO), a family of methods that bypasses the reward model and uses human preference data directly to align LLM outputs with human expectations. We also point out current challenges and deficiencies of existing methods and suggest avenues for further improvement. The project page of this work can be found at this https URL.
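To make the contrast with reward model-based RLHF/RLAIF concrete, the following is a minimal sketch of the DPO objective as it is commonly stated in the literature (not a formula taken from this survey; the notation for the reference policy, temperature, and preference pairs follows the standard DPO convention):

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]

Here y_w and y_l denote the preferred and dispreferred responses to a prompt x, \pi_{\mathrm{ref}} is a frozen reference policy (typically the supervised fine-tuned model), \beta controls the strength of the implicit KL regularization, and \sigma is the logistic function. Unlike RLHF and RLAIF, no separate reward model is trained: the policy is optimized directly on preference pairs.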

@article{wang2025_2412.10400,
  title={Reinforcement Learning Enhanced LLMs: A Survey},
  author={Shuhe Wang and Shengyu Zhang and Jie Zhang and Runyi Hu and Xiaoya Li and Tianwei Zhang and Jiwei Li and Fei Wu and Guoyin Wang and Eduard Hovy},
  journal={arXiv preprint arXiv:2412.10400},
  year={2025}
}