
Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning

Abstract

The ability to achieve long-term goals is a key challenge in the current development of large language models (LLMs). To address this, pre-trained LLMs can be fine-tuned with reinforcement learning (RL) to explore solutions that optimize a given goal. However, exploration with LLMs is difficult, as a balance has to be struck between discovering new solutions and staying close enough to the pre-trained model, so as not to degrade basic capabilities. This is typically controlled with a Kullback-Leibler (KL) penalty. In this paper, we investigate the exploration dynamics of a small language model on a simple arithmetic task. We show how varying degrees of pre-training influence exploration and demonstrate the importance of "critical tokens" which have a dramatic impact on the final outcome. Consequently, we introduce a simple modification to the KL penalty that favors exploration on critical tokens, increasing the efficiency of the RL fine-tuning stage.
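
As an illustration of the idea described above (not the authors' released implementation), the sketch below shows one way to drop a per-token KL penalty on tokens flagged as "critical" during RL fine-tuning. The function name, shape conventions, and toy criticality mask are assumptions for illustration; the abstract does not specify how critical tokens are identified.

import torch

def masked_kl_penalty(logprobs_policy: torch.Tensor,
                      logprobs_ref: torch.Tensor,
                      critical_mask: torch.Tensor,
                      kl_coef: float = 0.1) -> torch.Tensor:
    """Per-token KL penalty that is zeroed out on critical tokens.

    logprobs_policy, logprobs_ref: (batch, seq_len) log-probs of the sampled
        tokens under the fine-tuned policy and the frozen pre-trained reference.
    critical_mask: (batch, seq_len) bool, True where a token is 'critical'.
    """
    # Standard per-token KL estimate used in RLHF-style fine-tuning.
    kl = logprobs_policy - logprobs_ref
    # Ignore the penalty on critical tokens; keep it everywhere else.
    kl = kl * (~critical_mask).float()
    return kl_coef * kl  # subtracted from the per-token reward

# Toy usage with a hypothetical mask (one arbitrary position per sequence).
if __name__ == "__main__":
    lp_pi = -torch.rand(2, 8)   # fake log-probs of sampled tokens
    lp_ref = -torch.rand(2, 8)
    critical = torch.zeros(2, 8, dtype=torch.bool)
    critical[:, 3] = True       # pretend position 3 holds a critical token
    print(masked_kl_penalty(lp_pi, lp_ref, critical))

In this form the only change to a standard KL-penalized objective is the elementwise mask, so the rest of the policy-gradient loop can stay untouched.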

@article{vassoyan2025_2502.06533,
  title={Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning},
  author={Jean Vassoyan and Nathanaël Beau and Roman Plaud},
  journal={arXiv preprint arXiv:2502.06533},
  year={2025}
}