SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters

Existing preference optimization objectives for language model alignment require additional hyperparameters that must be extensively tuned to achieve optimal performance, increasing both the complexity and the time required for fine-tuning large language models. In this paper, we propose a simple yet effective hyperparameter-free preference optimization algorithm for alignment. We observe that promising performance can be achieved simply by optimizing inverse perplexity, calculated as the exponentiated average log-likelihood of the chosen and rejected responses in the preference dataset. The resulting simple learning objective, SimPER, is easy to implement and eliminates the need for expensive hyperparameter tuning and a reference model, making it both computationally and memory efficient. Extensive experiments on widely used real-world benchmarks, including MT-Bench, AlpacaEval 2, and 10 key benchmarks of the Open LLM Leaderboard with 5 base models, demonstrate that SimPER consistently and significantly outperforms existing approaches, even without any hyperparameters or a reference model. For example, despite its simplicity, SimPER outperforms state-of-the-art methods by up to 5.7 points on AlpacaEval 2 and achieves the highest average ranking across the 10 benchmarks of the Open LLM Leaderboard. The source code for SimPER is publicly available at: this https URL.
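To make the objective concrete, the following is a minimal PyTorch sketch of one natural reading of the abstract, not the authors' reference implementation: inverse perplexity is the exponentiated average per-token log-likelihood of a response, and the loss increases it for chosen responses while decreasing it for rejected ones. The function name and the assumption that summed log-likelihoods and token lengths are precomputed are illustrative.

```python
import torch

def simper_loss(chosen_logps: torch.Tensor,
                rejected_logps: torch.Tensor,
                chosen_lengths: torch.Tensor,
                rejected_lengths: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a SimPER-style objective.

    chosen_logps / rejected_logps: summed token log-likelihoods of each
    response under the policy model, shape (batch,).
    chosen_lengths / rejected_lengths: response lengths in tokens, shape (batch,).
    """
    # Inverse perplexity = exp(average per-token log-likelihood).
    inv_ppl_chosen = torch.exp(chosen_logps / chosen_lengths)
    inv_ppl_rejected = torch.exp(rejected_logps / rejected_lengths)
    # Maximize inverse perplexity of chosen responses and minimize it for
    # rejected ones; note there are no tunable hyperparameters and no
    # reference-model log-probabilities in this loss.
    return (-inv_ppl_chosen + inv_ppl_rejected).mean()
```

Because inverse perplexity lies in (0, 1], each term of this sketched loss is bounded, which is consistent with the paper's claim that no temperature- or margin-style hyperparameter is needed.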
@article{xiao2025_2502.00883,
  title   = {SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters},
  author  = {Teng Xiao and Yige Yuan and Zhengyu Chen and Mingxiao Li and Shangsong Liang and Zhaochun Ren and Vasant G. Honavar},
  journal = {arXiv preprint arXiv:2502.00883},
  year    = {2025}
}