Improving LLM General Preference Alignment via Optimistic Online Mirror Descent

Abstract

Reinforcement learning from human feedback (RLHF) has demonstrated remarkable effectiveness in aligning large language models (LLMs) with human preferences. Many existing alignment approaches rely on the Bradley-Terry (BT) model assumption, which assumes the existence of a ground-truth reward for each prompt-response pair. However, this assumption can be overly restrictive when modeling complex human preferences. In this paper, we drop the BT model assumption and study LLM alignment under general preferences, formulated as a two-player game. Drawing on theoretical insights from learning in games, we integrate optimistic online mirror descent into our alignment framework to approximate the Nash policy. Theoretically, we demonstrate that our approach achieves an $O(T^{-1})$ bound on the duality gap, improving upon the previous $O(T^{-1/2})$ result. More importantly, we implement our method and show through experiments that it outperforms state-of-the-art RLHF algorithms across multiple representative benchmarks.
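
To make the algorithmic idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) of optimistic online mirror descent with an entropy mirror map, i.e. optimistic multiplicative weights, applied to a toy two-player zero-sum preference game. The preference matrix P, step size eta, iteration count T, and the iterate averaging are all illustrative assumptions.

import numpy as np

def duality_gap(A, x, y):
    """Exploitability of (x, y) in the zero-sum game max_x min_y x^T A y."""
    return np.max(A @ y) - np.min(A.T @ x)

def optimistic_omd(A, T=500, eta=0.1):
    """Optimistic online mirror descent with an entropy mirror map
    (optimistic multiplicative weights) for a zero-sum matrix game.

    Row player maximizes x^T A y, column player minimizes it.
    Returns the average iterates and their duality gap.
    """
    n, m = A.shape
    # secondary ("lazy") iterates and played iterates, initialized uniform
    x_hat = np.ones(n) / n
    y_hat = np.ones(m) / m
    x, y = x_hat.copy(), y_hat.copy()
    x_avg, y_avg = np.zeros(n), np.zeros(m)

    for _ in range(T):
        # gradients observed at the currently played iterates
        g_x = A @ y          # row player's payoff vector
        g_y = -(A.T @ x)     # column player's payoff vector (it minimizes)

        # mirror-descent step on the secondary iterates
        x_hat = x_hat * np.exp(eta * g_x); x_hat /= x_hat.sum()
        y_hat = y_hat * np.exp(eta * g_y); y_hat /= y_hat.sum()

        # optimistic (predictive) step: reuse the last gradient as the
        # prediction of the next one
        x = x_hat * np.exp(eta * g_x); x /= x.sum()
        y = y_hat * np.exp(eta * g_y); y /= y.sum()

        x_avg += x; y_avg += y

    x_avg /= T; y_avg /= T
    return x_avg, y_avg, duality_gap(A, x_avg, y_avg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy preference matrix P with P[i, j] + P[j, i] = 1; payoff A = P - 1/2
    P = rng.uniform(size=(5, 5)); P = P / (P + P.T)
    A = P - 0.5
    _, _, gap = optimistic_omd(A)
    print(f"duality gap of average iterates: {gap:.2e}")

The optimistic step reuses the most recent gradient as a prediction for the next one; in the learning-in-games literature this is what improves the duality-gap rate from the $O(T^{-1/2})$ of plain mirror descent to $O(T^{-1})$, matching the rate stated in the abstract.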

@article{zhang2025_2502.16852,
  title={Improving LLM General Preference Alignment via Optimistic Online Mirror Descent},
  author={Yuheng Zhang and Dian Yu and Tao Ge and Linfeng Song and Zhichen Zeng and Haitao Mi and Nan Jiang and Dong Yu},
  journal={arXiv preprint arXiv:2502.16852},
  year={2025}
}