
A Scaling Law for Token Efficiency in LLM Fine-Tuning Under Fixed Compute Budgets

Abstract

We introduce a scaling law for fine-tuning large language models (LLMs) under fixed compute budgets that explicitly accounts for data composition. Conventional approaches measure training data solely by total tokens, yet the number of examples and their average token length -- what we term \emph{dataset volume} -- play a decisive role in model performance. Our formulation is tuned following established procedures. Experiments on the BRICC dataset \cite{salavati2024reducing} and subsets of the MMLU dataset \cite{hendrycks2021measuringmassivemultitasklanguage}, evaluated under multiple subsampling strategies, reveal that data composition significantly affects token efficiency. These results motivate refined scaling laws for practical LLM fine-tuning in resource-constrained settings.
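Because the abstract defines dataset volume through the number of examples and their average token length, the minimal Python sketch below illustrates that bookkeeping under a fixed token budget for two simple subsampling strategies. Everything here is hypothetical for illustration: the helper names, the synthetic example lengths, and the budget are assumptions, and the paper's actual fitted scaling-law form is not reproduced.

```python
# Hypothetical sketch (not the paper's method): how the same fixed token budget
# can be filled by different (num_examples, avg_token_length) compositions,
# i.e., different "dataset volumes", under two subsampling strategies.
import random
from dataclasses import dataclass


@dataclass
class DatasetStats:
    num_examples: int
    avg_tokens_per_example: float

    @property
    def total_tokens(self) -> float:
        # Total tokens = number of examples times their average token length.
        return self.num_examples * self.avg_tokens_per_example


def subsample_random(lengths: list[int], token_budget: int) -> DatasetStats:
    """Keep randomly chosen whole examples until the token budget is exhausted."""
    kept, used = 0, 0
    for length in random.sample(lengths, k=len(lengths)):
        if used + length > token_budget:
            break
        kept += 1
        used += length
    return DatasetStats(kept, used / max(kept, 1))


def subsample_short_first(lengths: list[int], token_budget: int) -> DatasetStats:
    """Prefer short examples: more examples, lower average length, same budget."""
    kept, used = 0, 0
    for length in sorted(lengths):
        if used + length > token_budget:
            break
        kept += 1
        used += length
    return DatasetStats(kept, used / max(kept, 1))


if __name__ == "__main__":
    random.seed(0)
    lengths = [random.randint(50, 800) for _ in range(5_000)]  # synthetic example lengths
    budget = 200_000  # fixed token budget
    for name, fn in [("random", subsample_random), ("short-first", subsample_short_first)]:
        stats = fn(lengths, budget)
        print(f"{name:11s} -> {stats.num_examples} examples, "
              f"avg {stats.avg_tokens_per_example:.0f} tokens/example, "
              f"{stats.total_tokens:.0f} total tokens")
```

Running the sketch shows that both strategies consume roughly the same token budget while yielding very different example counts and average lengths, which is the compositional distinction the proposed scaling law is meant to capture.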

@article{lagasse2025_2505.06150,
  title={A Scaling Law for Token Efficiency in LLM Fine-Tuning Under Fixed Compute Budgets},
  author={Ryan Lagasse and Aidan Kiernans and Avijit Ghosh and Shiri Dori-Hacohen},
  journal={arXiv preprint arXiv:2505.06150},
  year={2025}
}