
Scalable Fine-tuning from Multiple Data Sources: A First-Order Approximation Approach

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Main: 9 pages · Appendix: 5 pages · Bibliography: 3 pages · 4 figures · 8 tables
Abstract

We study the problem of fine-tuning a language model (LM) for a target task by optimally using the information from n auxiliary tasks. This problem has broad applications in NLP, such as targeted instruction tuning and data selection in chain-of-thought fine-tuning. The key challenge is that not all auxiliary tasks are beneficial to the target task, so selecting the right subset of auxiliary tasks is crucial. Conventional subset selection methods, such as forward and backward stepwise selection, are unsuitable for LM fine-tuning because they require repeated training on subsets of auxiliary tasks. This paper introduces a new algorithm for estimating model fine-tuning performance without repeated training. Our algorithm first performs multitask training using data from all tasks to obtain a meta initialization. Then, we approximate the fine-tuning loss of a subset using function values and gradients computed at the meta initialization. Empirically, we find that this gradient-based approximation holds with remarkable accuracy for twelve transformer-based LMs, allowing fine-tuning performances to be estimated on CPUs within a few seconds. Finally, we fine-tune the pretrained base model once on the selected subset of tasks. Extensive experiments validate this approach, delivering a 30× speedup over conventional subset selection while deviating from the true fine-tuning performances by only 1%. In downstream evaluations involving both instruction tuning and chain-of-thought fine-tuning, this loss-based selection approach improves over prior gradient- or representation-similarity-based subset selection methods by up to 3.8%.
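To make the estimator concrete, below is a minimal sketch of one plausible instantiation of the first-order idea described above. It assumes (these are not details from the paper) that the fine-tuned parameters for a candidate subset are approximated by a single aggregated gradient step away from the meta initialization, and that the target-task loss at those parameters is then approximated by a first-order Taylor expansion around the meta initialization. The function name estimate_subset_loss, the step size eta, and the random data are all illustrative.

```python
# Hypothetical sketch of a first-order loss estimator; not the paper's exact procedure.
import numpy as np

def estimate_subset_loss(loss_meta, grad_target, task_grads, subset, eta=1e-3):
    """Estimate the target-task loss after fine-tuning on `subset`,
    using only quantities precomputed once at the meta initialization.

    loss_meta   : target-task loss at the meta initialization (scalar)
    grad_target : target-task gradient at the meta initialization, shape (d,)
    task_grads  : per-auxiliary-task gradients at the meta initialization, shape (n, d)
    subset      : indices of the auxiliary tasks in the candidate subset
    eta         : assumed effective fine-tuning step size (hypothetical)
    """
    # Assumed parameter change from fine-tuning on the subset:
    # one aggregated gradient step away from the meta initialization.
    delta = -eta * task_grads[list(subset)].mean(axis=0)
    # First-order Taylor expansion of the target-task loss around the meta init.
    return loss_meta + grad_target @ delta

# Usage: score many candidate subsets on CPU with no additional training.
rng = np.random.default_rng(0)
n, d = 8, 1000                        # auxiliary tasks, (projected) parameter dim
task_grads = rng.normal(size=(n, d))  # stand-ins for precomputed per-task gradients
grad_target = rng.normal(size=d)
subsets = [(0, 1), (2, 5, 7), tuple(range(n))]
scores = {s: estimate_subset_loss(1.0, grad_target, task_grads, s) for s in subsets}
best = min(scores, key=scores.get)    # subset with the lowest estimated loss
```

Because the per-task gradients and the target-task loss and gradient are computed once at the meta initialization, scoring any subset reduces to a dot product, which is what makes CPU-only estimation in seconds plausible.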
