Preference learning is critical for aligning large language models (LLMs) with human values, yet its success hinges on high-quality datasets comprising three core components: Preference \textbf{A}nnotations, \textbf{I}nstructions, and \textbf{R}esponse Pairs. Current approaches conflate these components, obscuring their individual impacts and hindering systematic optimization. In this work, we propose \textbf{AIR}, a component-wise analysis framework that systematically isolates and optimizes each component while evaluating their synergistic effects. Through rigorous experimentation, AIR reveals actionable principles: annotation simplicity (point-wise generative scoring), instruction inference stability (variance-based filtering across LLMs), and response pair quality (moderate margins and high absolute scores). When combined, these principles yield a +5.3 average gain over the baseline method, even with only 14k high-quality pairs. Our work shifts preference dataset design from ad hoc scaling to component-aware optimization, offering a blueprint for efficient, reproducible alignment.
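To make two of these principles concrete, the following is a minimal Python sketch of variance-based instruction filtering and margin-based response pair selection. All function names, the scorer interface, and the threshold values are hypothetical illustrations under stated assumptions, not the paper's actual implementation.

from statistics import pvariance

def stable_instructions(instructions, scorers, max_variance=1.0):
    """Instruction stability: keep instructions whose point-wise quality
    scores agree across several scoring LLMs (low cross-model variance).
    `scorers` is assumed to be a list of callables, one per LLM, each
    mapping an instruction string to a numeric score."""
    kept = []
    for inst in instructions:
        scores = [scorer(inst) for scorer in scorers]
        if pvariance(scores) <= max_variance:
            kept.append(inst)
    return kept

def select_pairs(responses, min_score=7.0, min_margin=1.0, max_margin=3.0):
    """Response pair quality: build (chosen, rejected) pairs with a
    moderate score margin and a high absolute score for the chosen
    response. `responses` is assumed to be a list of (text, score)
    tuples from point-wise generative scoring; thresholds are illustrative."""
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    pairs = []
    for i, (chosen, chosen_score) in enumerate(ranked):
        for rejected, rejected_score in ranked[i + 1:]:
            margin = chosen_score - rejected_score
            if chosen_score >= min_score and min_margin <= margin <= max_margin:
                pairs.append((chosen, rejected))
    return pairs

The margin window reflects the stated principle: pairs that are too close provide a noisy preference signal, while pairs that are too far apart are trivially easy, so a moderate margin combined with a high absolute score is preferred.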
@article{he2025_2504.03612,
  title={AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset},
  author={Bingxiang He and Wenbin Zhang and Jiaxi Song and Cheng Qian and Zixuan Fu and Bowen Sun and Ning Ding and Haiwen Hong and Longtao Huang and Hui Xue and Ganqu Cui and Wanxiang Che and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2504.03612},
  year={2025}
}