
Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit

Abstract

One of the main challenges in optimally scaling large language models (LLMs) is the prohibitive cost of hyperparameter tuning, particularly of the learning rate $\eta$ and the batch size $B$. While techniques like $\mu$P (Yang et al., 2022) provide scaling rules for optimal $\eta$ transfer in the infinite model size limit, the optimal scaling behavior in the infinite data size limit remains unknown. We fill this gap by observing, for the first time, an intricate dependence of optimal $\eta$ scaling on the pretraining token budget $T$, on $B$, and on its relation to the critical batch size $B_\mathrm{crit}$, which we measure to evolve as $B_\mathrm{crit} \propto T$. Furthermore, we show that the optimal batch size is positively correlated with $B_\mathrm{crit}$: keeping it fixed becomes suboptimal over time even if the learning rate is scaled optimally. Surprisingly, our results demonstrate that the observed optimal $\eta$ and $B$ dynamics are preserved under $\mu$P model scaling, challenging the conventional view that $B_\mathrm{crit}$ depends solely on the loss value. Complementing optimality, we examine the sensitivity of the loss to changes in learning rate, finding that this sensitivity decreases as $T$ increases and remains constant under $\mu$P model scaling. We hope our results take a first step toward a unified picture of joint optimal data and model scaling.
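
To make the reported proportionality concrete, below is a minimal sketch, assuming a hypothetical measured reference point $(T_\mathrm{ref}, B_{\mathrm{crit},\mathrm{ref}})$ and a hypothetical rule of keeping $B$ at a fixed fraction of $B_\mathrm{crit}$. It only illustrates the linear trend $B_\mathrm{crit} \propto T$ stated in the abstract, not the authors' actual tuning procedure; all constants are placeholders.

```python
# Minimal sketch (not from the paper): the abstract reports B_crit ∝ T,
# i.e. the critical batch size grows linearly with the token budget.
# The reference point (T_ref, Bcrit_ref) and the fraction_of_crit rule are
# hypothetical placeholders a practitioner would measure on their own setup.

def critical_batch_size(tokens: float, tokens_ref: float, bcrit_ref: float) -> float:
    """B_crit(T) ≈ B_crit(T_ref) * T / T_ref, assuming the linear trend B_crit ∝ T."""
    return bcrit_ref * tokens / tokens_ref

def suggested_batch_size(tokens: float, tokens_ref: float, bcrit_ref: float,
                         fraction_of_crit: float = 0.5) -> int:
    """Keep B a fixed fraction of B_crit rather than constant, since the
    abstract reports that a fixed B becomes suboptimal as T grows."""
    return int(fraction_of_crit * critical_batch_size(tokens, tokens_ref, bcrit_ref))

if __name__ == "__main__":
    T_ref, Bcrit_ref = 1e10, 1024  # hypothetical measured reference point
    for T in (1e10, 1e11, 1e12):
        bcrit = critical_batch_size(T, T_ref, Bcrit_ref)
        B = suggested_batch_size(T, T_ref, Bcrit_ref)
        print(f"T = {T:.0e} tokens -> B_crit ≈ {bcrit:.0f}, suggested B ≈ {B}")
```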
