
Parallel Scaling Law for Language Models

Abstract

It is commonly believed that scaling language models requires a significant space or time cost, by increasing the parameters (parameter scaling) or the output tokens (inference-time scaling). We introduce a third and more inference-efficient scaling paradigm: increasing the model's parallel computation during both training and inference. We apply P diverse and learnable transformations to the input, execute forward passes of the model in parallel, and dynamically aggregate the P outputs. This method, named parallel scaling (ParScale), scales parallel computation by reusing existing parameters and can be applied to any model structure, optimization procedure, data, or task. We theoretically propose a new scaling law and validate it through large-scale pre-training, which shows that a model with P parallel streams is comparable to scaling the parameters by O(log P) while showing superior inference efficiency. For example, ParScale can use up to 22× less memory increase and 6× less latency increase compared to parameter scaling that achieves the same performance improvement. It can also recycle an off-the-shelf pre-trained model into a parallelly scaled one by post-training on a small number of tokens, further reducing the training budget. The new scaling law we discovered potentially facilitates the deployment of more powerful models in low-resource scenarios, and provides an alternative perspective on the role of computation in machine learning.
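
The sketch below illustrates the core idea from the abstract: P parallel streams that reuse one base model's parameters, each with its own learnable input transformation, followed by a dynamic aggregation of the P outputs. It is a minimal reading of the method, not the authors' implementation; the additive stream embeddings, the gating head, and all class and parameter names here are illustrative assumptions.

```python
# Hedged sketch of the ParScale idea: P streams share the base model's
# parameters, each applies a learnable input transformation, and a small
# gate dynamically weights the P outputs. Details are assumptions.
import torch
import torch.nn as nn


class ParallelScaledModel(nn.Module):
    """Wraps a base model with P parallel streams that reuse its parameters."""

    def __init__(self, base_model: nn.Module, hidden_dim: int, num_streams: int):
        super().__init__()
        self.base_model = base_model              # existing parameters, reused by every stream
        self.num_streams = num_streams
        # One learnable transformation per stream (here: a simple additive embedding).
        self.stream_embeddings = nn.Parameter(torch.randn(num_streams, hidden_dim) * 0.02)
        # Small head that scores each stream's output for dynamic aggregation.
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden_dim) — already-embedded inputs, for simplicity.
        outputs = []
        for p in range(self.num_streams):
            transformed = x + self.stream_embeddings[p]    # diverse, learnable input transformation
            outputs.append(self.base_model(transformed))   # P forward passes (parallelizable)
        stacked = torch.stack(outputs, dim=1)              # (batch, P, seq, hidden_dim)
        # Dynamic aggregation: softmax over per-stream scores, then weighted sum.
        scores = self.gate(stacked.mean(dim=2))            # (batch, P, 1)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        return (weights * stacked).sum(dim=1)              # (batch, seq, hidden_dim)


if __name__ == "__main__":
    base = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    model = ParallelScaledModel(base, hidden_dim=64, num_streams=4)
    out = model(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

In this toy setup the extra memory cost is only the P stream embeddings and the gate, while compute grows with P; this mirrors the abstract's point that parallel scaling trades parameter growth for reusable parallel computation.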

@article{chen2025_2505.10475,
  title={Parallel Scaling Law for Language Models},
  author={Mouxiang Chen and Binyuan Hui and Zeyu Cui and Jiaxi Yang and Dayiheng Liu and Jianling Sun and Junyang Lin and Zhongxin Liu},
  journal={arXiv preprint arXiv:2505.10475},
  year={2025}
}