
Efficient Inference Using Large Language Models with Limited Human Data: Fine-Tuning then Rectification

Main: 27 pages, 5 figures, 2 tables · Bibliography: 4 pages · Appendix: 13 pages
Abstract

Driven by recent advances in artificial intelligence (AI), a growing literature has demonstrated the potential of large language models (LLMs) to serve as scalable surrogates that generate human-like responses in many business applications. Two common approaches to improving LLM performance are fine-tuning, which aligns LLMs more closely with human responses, and rectification, which corrects biases in LLM outputs. In this paper, we develop a two-stage framework that combines fine-tuning and rectification and optimally allocates a limited budget of labeled samples across the two stages. Rather than the conventional objective of minimizing the mean squared prediction error, we propose minimizing the variance of the prediction errors as the fine-tuning objective, which is optimal for the downstream rectification stage. Building on this insight, we leverage the scaling law of fine-tuning to optimally split the limited labeled human data between the fine-tuning and rectification stages. Our empirical analysis validates the fine-tuning scaling law and confirms that the proposed allocation rule reliably identifies the optimal sample split. We demonstrate substantial gains in estimation and inference efficiency relative to fine-tuning alone, rectification alone, or using the standard mean-squared-error objective within the fine-tuning-then-rectification framework, resulting in significant cost savings for reliable business decisions.
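To make the intuition concrete, the sketch below illustrates (it is not the paper's implementation) a rectification-style mean estimator and a toy budget split between fine-tuning and rectification. It assumes a prediction-powered-inference-style bias correction and a hypothetical power-law scaling of the post-fine-tuning error variance; the function names (`rectified_mean`, `allocate_budget`), the scaling-law form, and all numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def rectified_mean(y_labeled, yhat_labeled, yhat_unlabeled):
    """Bias-corrected ('rectified') estimate of the population mean of y.

    Combines LLM predictions on a large unlabeled pool with a small labeled
    set used to estimate the prediction bias. The estimator's variance is
    driven by Var(y - yhat) / n_labeled, which is why the fine-tuning stage
    should target the *variance* of the prediction errors rather than their MSE.
    """
    bias_correction = np.mean(y_labeled - yhat_labeled)
    return np.mean(yhat_unlabeled) + bias_correction


def allocate_budget(n_total, sigma2_0, c, alpha, grid=200):
    """Toy split of n_total labeled samples between fine-tuning (n_ft) and
    rectification (n_rect = n_total - n_ft).

    Assumes a hypothetical power-law scaling law for the error variance
    after fine-tuning on n_ft samples:
        Var(error | n_ft) ~= sigma2_0 + c * n_ft**(-alpha)
    The rectified estimator's variance then scales like
    Var(error | n_ft) / n_rect, which we minimize over a grid of splits.
    """
    best = None
    for n_ft in np.linspace(1, n_total - 1, grid):
        n_rect = n_total - n_ft
        est_var = (sigma2_0 + c * n_ft ** (-alpha)) / n_rect
        if best is None or est_var < best[1]:
            best = (n_ft, est_var)
    n_ft_opt = int(round(best[0]))
    return n_ft_opt, n_total - n_ft_opt


# Small synthetic demonstration; all numbers are illustrative.
n_labeled, n_unlabeled = 200, 20_000
y_labeled = rng.normal(5.0, 2.0, n_labeled)
yhat_labeled = y_labeled + rng.normal(0.5, 1.0, n_labeled)  # biased LLM predictions
yhat_unlabeled = rng.normal(5.0, 2.0, n_unlabeled) + rng.normal(0.5, 1.0, n_unlabeled)

print("rectified mean:", rectified_mean(y_labeled, yhat_labeled, yhat_unlabeled))
print("optimal split (n_ft, n_rect):",
      allocate_budget(1_000, sigma2_0=0.2, c=5.0, alpha=0.5))
```

Under these assumed parameters, the grid search trades off spending labels on fine-tuning (which shrinks the error variance along the assumed scaling law) against reserving labels for rectification (which divides that variance); the paper's allocation rule formalizes this trade-off.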
