RL-Guided Data Selection for Language Model Finetuning

Data selection for finetuning Large Language Models (LLMs) can be framed as a budget-constrained optimization problem: maximizing a model's downstream performance under a strict training data budget. Solving this problem is generally intractable, and existing approximate approaches are pretraining-oriented and transfer poorly to the finetuning setting. We reformulate this problem as a tractable Markov Decision Process (MDP) and train agents using various Reinforcement Learning (RL) methods to learn optimal data selection policies, guided by an efficient, proxy-model-based reward signal. Across four datasets, training on a subset selected by our approach matches or outperforms finetuning on the full dataset by up to accuracy points, while cutting wall-clock training time by up to , highlighting the promise of RL-guided data selection.
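The abstract does not spell out the exact MDP, reward, or RL algorithm, but the general idea of framing subset selection as a sequential keep/skip decision with a proxy-model reward can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the Bernoulli keep/skip policy, the REINFORCE update, and the `proxy_reward` stand-in for "improvement measured on a small proxy model" are hypothetical, not the paper's method.

```python
# Illustrative sketch only: all quantities (features, proxy_reward, BUDGET, the
# REINFORCE policy) are hypothetical stand-ins for RL-guided data selection.
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": each candidate example is summarized by a feature vector
# (e.g. embedding statistics, length, perplexity under a proxy model).
N_CANDIDATES, N_FEATURES, BUDGET = 200, 8, 40
features = rng.normal(size=(N_CANDIDATES, N_FEATURES))

# Hypothetical proxy reward: stands in for "gain of a small proxy model after
# adding this example"; here a hidden linear utility plus noise.
true_utility = rng.normal(size=N_FEATURES)
def proxy_reward(idx):
    return features[idx] @ true_utility + rng.normal(scale=0.1)

# Policy: logistic scorer over features; the action is keep/skip for the next
# candidate while the budget allows. Trained with vanilla REINFORCE.
theta = np.zeros(N_FEATURES)

def run_episode(theta):
    order = rng.permutation(N_CANDIDATES)
    decisions, ep_return, kept = [], 0.0, 0
    for i in order:
        if kept >= BUDGET:
            break
        p_keep = 1.0 / (1.0 + np.exp(-features[i] @ theta))
        keep = rng.random() < p_keep
        decisions.append((i, keep, p_keep))
        if keep:
            kept += 1
            ep_return += proxy_reward(i)
    return decisions, ep_return

baseline = 0.0
for step in range(500):
    decisions, ep_return = run_episode(theta)
    baseline = 0.9 * baseline + 0.1 * ep_return       # moving-average baseline
    advantage = ep_return - baseline
    grad = np.zeros_like(theta)
    for i, keep, p_keep in decisions:
        # d/dtheta log pi(a|s) for a Bernoulli keep/skip policy
        grad += (float(keep) - p_keep) * features[i]
    theta += 0.01 * advantage * grad / max(len(decisions), 1)

print("correlation of learned scorer with hidden utility:",
      np.corrcoef(features @ theta, features @ true_utility)[0, 1])
```

In this toy setup the learned scorer comes to prefer candidates with high hidden utility, which is the behavior an RL-guided selector would need before the selected subset is handed to the full finetuning run.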