Towards Active Synthetic Data Generation for Finetuning Language Models

Samuel Kessler
Menglin Xia
Daniel Madrigal Diaz
Dongge Han
Helia Hashemi
Saravan Rajmohan
Victor Ruehle
Jordan T. Ash
Abstract

A common and effective means of improving language model capabilities is to finetune a "student" language model's parameters on generations from a more proficient "teacher" model. Termed "synthetic data", these generations are often produced in full before any student finetuning, but some work has considered generating new synthetic samples as training progresses. This paper studies and advocates for the latter case, where data are generated in an iterative, closed-loop fashion guided by the current state of the student model. For a fixed budget of generated samples, or a fixed budget of compute spent querying the teacher, we show that this curation of finetuning data affords improved student performance over static generation. Further, while several LLM-specific methods have been proposed for this regime, we find that simple, inexpensive selection criteria from the active learning literature tend to be most performant. We validate these claims across four mathematical and logical reasoning datasets using four different small language models.
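The closed-loop setup the abstract describes can be sketched in miniature. The snippet below is an illustrative skeleton only, not the paper's method: `teacher_generate`, `student_uncertainty`, and the toy "finetuning" update are all hypothetical stand-ins (here, scalar arithmetic in place of real models), chosen to show the control flow of iteratively generating candidates, selecting the ones the current student finds hardest via an active-learning-style criterion, and updating the student under a fixed sample budget.

```python
import random


def teacher_generate(n, rng):
    # Stand-in for querying a teacher model: emit n candidate "examples"
    # (here just numbers standing in for problems of varying difficulty).
    return [rng.uniform(0.0, 10.0) for _ in range(n)]


def student_uncertainty(example, student):
    # Hypothetical active-learning score: how far the example lies from
    # the student's current "skill" level. A real system might use the
    # student's loss or predictive entropy on the example instead.
    return abs(example - student["skill"])


def active_finetuning_loop(rounds=3, candidates_per_round=8, select_k=2, seed=0):
    rng = random.Random(seed)
    student = {"skill": 0.0}
    selected = []
    for _ in range(rounds):
        # 1. Generate a fresh candidate pool from the teacher.
        pool = teacher_generate(candidates_per_round, rng)
        # 2. Select the examples the *current* student is most uncertain
        #    about, so curation adapts as the student improves.
        pool.sort(key=lambda ex: student_uncertainty(ex, student), reverse=True)
        batch = pool[:select_k]
        selected.extend(batch)
        # 3. Stand-in "finetuning": nudge the student toward the batch mean.
        student["skill"] += 0.5 * (sum(batch) / len(batch) - student["skill"])
    return student, selected
```

Note that the total number of selected samples is `rounds * select_k`, so the loop respects a fixed generation budget while the selection criterion re-ranks candidates against the evolving student at every round.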

Main: 11 pages · Bibliography: 7 pages · Appendix: 18 pages · 14 figures · 3 tables