
Larger or Smaller Reward Margins to Select Preferences for Alignment?

Abstract

Preference learning is critical for aligning large language models (LLMs) with human values, and the quality of preference datasets plays a crucial role in this process. While existing metrics primarily assess data quality based on either explicit or implicit reward margins, they often provide contradictory evaluations for the same data. To address this issue, we introduce the alignment potential metric, which quantifies the gap from the model's current implicit reward margin to the target explicit reward margin, thereby estimating the model's potential to align with the preference data. Empirical results demonstrate that training on data selected by this metric consistently enhances alignment performance, surpassing existing metrics across different base models and optimization objectives. Furthermore, our method extends to self-play data generation frameworks, where the metric is used to identify high-quality data within content the LLM generates itself. In this data generation scenario, our method surpasses current state-of-the-art (SOTA) results across various training settings and demonstrates continuous improvements in alignment performance as dataset size and training iterations increase.
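
To make the idea concrete, below is a minimal sketch of how such a data-selection score might be computed. It assumes the implicit reward margin is the DPO-style log-probability-ratio margin of the policy against a reference model, that the explicit reward margin is the score gap from an external reward model, and that the alignment potential is simply the gap between the two; the function names and this exact combination are illustrative assumptions, not the paper's definition.

import torch

def implicit_reward_margin(policy_logps_chosen, policy_logps_rejected,
                           ref_logps_chosen, ref_logps_rejected, beta=0.1):
    # DPO-style implicit margin: beta times the difference of policy-vs-reference
    # log-probability ratios for the chosen and rejected responses (an assumed form).
    chosen_ratio = policy_logps_chosen - ref_logps_chosen
    rejected_ratio = policy_logps_rejected - ref_logps_rejected
    return beta * (chosen_ratio - rejected_ratio)

def explicit_reward_margin(rm_score_chosen, rm_score_rejected):
    # Explicit margin: score gap assigned by an external reward model.
    return rm_score_chosen - rm_score_rejected

def alignment_potential(explicit_margin, implicit_margin):
    # Hypothetical alignment-potential score: the gap from the model's current
    # implicit margin to the target explicit margin, so pairs the policy has not
    # yet internalized (but the reward model ranks confidently) score highest.
    return explicit_margin - implicit_margin

# Usage sketch: rank preference pairs by the score and keep the top half.
if __name__ == "__main__":
    explicit = torch.tensor([2.3, 0.4, 1.1, 3.0])   # reward-model score gaps
    implicit = torch.tensor([1.9, 0.5, -0.2, 0.1])  # current policy margins
    scores = alignment_potential(explicit, implicit)
    keep = torch.topk(scores, k=len(scores) // 2).indices
    print("selected pair indices:", keep.tolist())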

@article{huang2025_2503.01864,
  title={Larger or Smaller Reward Margins to Select Preferences for Alignment?},
  author={Kexin Huang and Junkang Wu and Ziqian Chen and Xue Wang and Jinyang Gao and Bolin Ding and Jiancan Wu and Xiangnan He and Xiang Wang},
  journal={arXiv preprint arXiv:2503.01864},
  year={2025}
}