Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning

Post-training of pre-trained LLMs, which typically consists of a supervised fine-tuning (SFT) stage and a preference learning (RLHF or DPO) stage, is crucial for effective and safe LLM applications. The widely adopted approach for post-training popular open-source LLMs is to perform SFT and RLHF/DPO sequentially. However, sequential training is sub-optimal with respect to the SFT and RLHF/DPO trade-off: the LLM gradually forgets the first stage's training while undergoing the second stage's training. We theoretically prove the sub-optimality of sequential post-training. Furthermore, we propose a practical joint post-training framework that has theoretical convergence guarantees and empirically outperforms sequential post-training, while incurring a similar computational cost. Our code is available at this https URL.
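
The abstract does not spell out the joint post-training algorithm, so the following is only a minimal illustrative sketch, under the assumption that "joint" means optimizing a weighted combination of an SFT loss and a DPO-style preference loss in every update rather than in two separate stages. TinyLM, seq_logprob, sft_weight, and dpo_beta are hypothetical names introduced here for illustration; this is not the authors' method.

    import torch
    import torch.nn.functional as F

    # Toy causal LM stand-in; any model mapping token ids -> next-token logits works.
    vocab_size, hidden = 100, 32

    class TinyLM(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = torch.nn.Embedding(vocab_size, hidden)
            self.head = torch.nn.Linear(hidden, vocab_size)

        def forward(self, ids):                      # ids: (batch, seq)
            return self.head(self.emb(ids))          # logits: (batch, seq, vocab)

    def seq_logprob(model, ids):
        """Sum of per-token log-probabilities of a sequence under the model."""
        logits = model(ids[:, :-1])
        logp = F.log_softmax(logits, dim=-1)
        tok = ids[:, 1:].unsqueeze(-1)
        return logp.gather(-1, tok).squeeze(-1).sum(dim=-1)

    policy = TinyLM()
    reference = TinyLM()                             # frozen reference model for the DPO term
    reference.load_state_dict(policy.state_dict())
    for p in reference.parameters():
        p.requires_grad_(False)

    opt = torch.optim.AdamW(policy.parameters(), lr=1e-3)
    sft_weight, dpo_beta = 0.5, 0.1                  # illustrative trade-off knobs

    for step in range(100):
        # Dummy batches; in practice these come from the SFT and preference datasets.
        sft_ids = torch.randint(0, vocab_size, (4, 16))
        chosen = torch.randint(0, vocab_size, (4, 16))
        rejected = torch.randint(0, vocab_size, (4, 16))

        # SFT term: next-token cross-entropy on the SFT batch.
        logits = policy(sft_ids[:, :-1])
        sft_loss = F.cross_entropy(
            logits.reshape(-1, vocab_size), sft_ids[:, 1:].reshape(-1)
        )

        # DPO term: logistic loss on the preference margin relative to the reference model.
        margin = (seq_logprob(policy, chosen) - seq_logprob(policy, rejected)) \
               - (seq_logprob(reference, chosen) - seq_logprob(reference, rejected))
        dpo_loss = -F.logsigmoid(dpo_beta * margin).mean()

        # Joint update: both objectives contribute to every step, instead of two sequential stages.
        loss = sft_weight * sft_loss + (1.0 - sft_weight) * dpo_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

Because both losses appear in each gradient step, the policy never trains on the preference objective alone, which is one simple way to avoid drifting away from (and thus forgetting) the SFT data; the paper's actual framework and its convergence analysis may differ from this sketch.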
@article{fernando2025_2410.15483,
  title   = {Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning},
  author  = {Heshan Fernando and Han Shen and Parikshit Ram and Yi Zhou and Horst Samulowitz and Nathalie Baracaldo and Tianyi Chen},
  journal = {arXiv preprint arXiv:2410.15483},
  year    = {2025}
}