ParamΔ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost

The post-training phase of large language models is essential for enhancing capabilities such as instruction-following, reasoning, and alignment with human preferences. However, it demands extensive high-quality data and poses risks like overfitting, alongside significant computational costs due to repeated post-training and evaluation after each base model update. This paper introduces ParamΔ, a novel method that streamlines post-training by transferring knowledge from an existing post-trained model to a newly updated base model with ZERO additional training. By computing the difference between post-trained model weights ($\Theta_{\text{post}}$) and base model weights ($\Theta_{\text{base}}$), and adding this to the updated base model ($\Theta'_{\text{base}}$), we define the ParamΔ Model as: $\Theta_{\text{Param}\Delta} = \Theta'_{\text{base}} + (\Theta_{\text{post}} - \Theta_{\text{base}})$. This approach surprisingly equips the new base model with post-trained capabilities, achieving performance comparable to direct post-training. We analyze Llama3, Llama3.1, Qwen, and DeepSeek-distilled models. Results indicate the ParamΔ Model effectively replicates traditional post-training. For example, the ParamΔ Model obtained from the 70B Llama3-inst, Llama3-base, and Llama3.1-base models attains approximately 95\% of the Llama3.1-inst model's performance on average. ParamΔ brings a new perspective on how to fully leverage models in the open-weight community, where checkpoints for base and instruct models are readily available and frequently updated, by providing a cost-free framework to accelerate the iterative cycle of model development.
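As a rough illustration of the weight-mixing rule above, the following is a minimal sketch of applying $\Theta_{\text{Param}\Delta} = \Theta'_{\text{base}} + (\Theta_{\text{post}} - \Theta_{\text{base}})$ parameter by parameter. It assumes all three checkpoints share identical architectures and parameter names; the Hugging Face model identifiers and output path are placeholders, not the authors' released artifacts.

```python
# Sketch of ParamΔ weight mixing, assuming three checkpoints with matching
# state-dict keys and tensor shapes (model names below are illustrative).
import torch
from transformers import AutoModelForCausalLM

BASE = "meta-llama/Meta-Llama-3-8B"            # Θ_base  (old base model)
POST = "meta-llama/Meta-Llama-3-8B-Instruct"   # Θ_post  (post-trained on old base)
NEW_BASE = "meta-llama/Llama-3.1-8B"           # Θ'_base (updated base model)

def load_state(name):
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    return model, model.state_dict()

_, base_sd = load_state(BASE)
_, post_sd = load_state(POST)
new_model, new_sd = load_state(NEW_BASE)

# Θ_ParamΔ = Θ'_base + (Θ_post - Θ_base), computed per parameter tensor.
merged_sd = {}
for key, new_param in new_sd.items():
    delta = post_sd[key].to(torch.float32) - base_sd[key].to(torch.float32)
    merged_sd[key] = (new_param.to(torch.float32) + delta).to(new_param.dtype)

new_model.load_state_dict(merged_sd)
new_model.save_pretrained("param-delta-model")  # post-trained capabilities, zero training
```

The accumulation is done in float32 before casting back to the checkpoint dtype, a common precaution when adding small weight deltas to low-precision (e.g., bfloat16) tensors.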
@article{cao2025_2504.21023,
  title   = {Param$\Delta$ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost},
  author  = {Sheng Cao and Mingrui Wu and Karthik Prasad and Yuandong Tian and Zechun Liu},
  journal = {arXiv preprint arXiv:2504.21023},
  year    = {2025}
}