
RL-Guided Data Selection for Language Model Finetuning

Main: 4 pages · 4 figures · 6 tables · Bibliography: 2 pages · Appendix: 6 pages
Abstract

Data selection for fine-tuning Large Language Models (LLMs) can be framed as a budget-constrained optimization problem: maximizing a model's downstream performance under a strict training data budget. Solving this problem is generally intractable, and existing approximate approaches are pretraining-oriented and transfer poorly to the fine-tuning setting. We reformulate this problem as a tractable Markov Decision Process (MDP) and train agents using various Reinforcement Learning (RL) methods to learn optimal data selection policies, guided by an efficient, proxy-model-based reward signal. Across four datasets, training on a 5% subset selected by our approach matches or outperforms fine-tuning on the full dataset by up to 10.8 accuracy points, while cutting wall-clock training time by up to 2×, highlighting the promise of RL-guided data selection.
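The abstract only describes the approach at a high level. The sketch below illustrates one way an RL-guided selection loop of this kind could look, assuming a one-step MDP (state: the candidate pool; action: a budget-sized subset; reward: a cheap proxy score) with a REINFORCE-style update of a linear selection policy. The feature construction, the `proxy_reward` stub, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Toy sketch of RL-guided data selection: a softmax selection policy over
# per-example features, trained with REINFORCE against a proxy reward.
# Everything below is an assumption for illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

N, D, BUDGET = 1000, 16, 50               # pool size, feature dim, 5% selection budget
features = rng.normal(size=(N, D))        # per-example features (e.g., embeddings/statistics)
quality = features @ rng.normal(size=D)   # hidden "usefulness" used only by the stub reward

def proxy_reward(idx):
    # Stand-in for a proxy-model-based reward: in practice, fine-tune a small
    # proxy model on the selected subset and return its validation score.
    return float(quality[idx].mean())

theta = np.zeros(D)                       # linear selection-policy parameters
lr, baseline = 0.05, 0.0

for step in range(200):
    logits = features @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Action: sample a budget-sized subset without replacement.
    idx = rng.choice(N, size=BUDGET, replace=False, p=probs)
    r = proxy_reward(idx)
    baseline = 0.9 * baseline + 0.1 * r   # moving-average baseline reduces variance
    advantage = r - baseline
    # REINFORCE gradient, ignoring the without-replacement coupling between draws
    # (a common simplification): grad log pi(i) = x_i - E_p[x].
    grad = np.zeros(D)
    for i in idx:
        grad += features[i] - probs @ features
    theta += lr * advantage * grad / BUDGET

best = np.argsort(features @ theta)[-BUDGET:]   # final selection: top-scoring examples
print("proxy reward of learned selection:", proxy_reward(best))
```

In this toy setup the learned policy concentrates probability on high-"quality" examples; in the paper's setting the reward would instead come from evaluating a small proxy model fine-tuned on the selected subset.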
