Recent advances in large language models (LLMs) have accelerated progress toward artificial general intelligence, yet their potential to generate harmful content poses critical safety challenges. Existing alignment methods often struggle to cover diverse safety scenarios and remain vulnerable to adversarial attacks. In this work, we propose Ex-Ante Reasoning Preference Optimization (ERPO), a novel safety alignment framework that equips LLMs with explicit preemptive Chain-of-Thought reasoning and grounds safety judgments in predefined safety rules. Specifically, our approach consists of three stages: first, equipping the model with Ex-Ante reasoning through supervised fine-tuning (SFT) on a constructed reasoning module; second, enhancing safety, usefulness, and efficiency via Direct Preference Optimization (DPO); and third, mitigating inference latency with a length-controlled iterative preference optimization strategy. Experiments on multiple open-source LLMs demonstrate that ERPO significantly improves safety performance while maintaining response efficiency.
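For reference, the second stage builds on standard Direct Preference Optimization. The minimal sketch below shows the usual DPO objective computed from summed response log-probabilities under the policy and a frozen reference model; the function name, tensor shapes, and beta value are illustrative assumptions, not the paper's ERPO implementation.

```python
# Minimal sketch of the standard DPO objective (Rafailov et al., 2023), on which
# ERPO's second stage builds. All names and hyperparameters here are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss from summed log-probabilities of the chosen (preferred) and
    rejected responses under the trainable policy and the frozen reference."""
    # Log-ratio of policy to reference, scaled by beta, acts as an implicit reward
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Preference margin pushed through a logistic (log-sigmoid) loss
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs
if __name__ == "__main__":
    lp = lambda: torch.randn(4)
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```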
@article{feng2025_2504.02725,
  title   = {ERPO: Advancing Safety Alignment via Ex-Ante Reasoning Preference Optimization},
  author  = {Kehua Feng and Keyan Ding and Jing Yu and Menghan Li and Yuhao Wang and Tong Xu and Xinda Wang and Qiang Zhang and Huajun Chen},
  journal = {arXiv preprint arXiv:2504.02725},
  year    = {2025}
}