Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference

Abstract

Learning reward models from human preference datasets and subsequently optimizing language models via reinforcement learning has emerged as a fundamental paradigm for aligning LLMs with human preferences. The performance of the reward model plays a crucial role in the effectiveness of alignment. Previous reward models operate at a coarse-grained level, requiring the generation of a complete response to obtain a reward value. The resulting sparse reward may present challenges for downstream reinforcement learning. While recent efforts have attempted to learn token-level reward models, the lack of explicit semantic information makes it difficult to assign credit to every individual token. In this paper, we propose assigning scores to every sentence, introducing an intermediate-grained reward model. By segmenting the complete response into sentences and applying a differencing operation to the reward outputs at the start and end positions of each sentence, we can effectively model the rewards of sentences. Moreover, a novel attention mechanism is introduced to aggregate the scores of all sentences into a response-level score, which allows the model to be trained using the Bradley-Terry model. On common benchmarks, our method outperforms the response-level reward model by 2.7% on RewardBench (for reward modeling evaluation) and surpasses all baselines on AlpacaEval (for alignment evaluation).

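As a rough illustration of the mechanism described in the abstract, the PyTorch sketch below (not the authors' implementation) derives per-sentence rewards by differencing per-token reward outputs at sentence boundaries, pools them into a response-level score with a learned attention query, and trains with a Bradley-Terry preference loss. Names such as SentenceLevelRewardModel, reward_head, pool_query, and sentence_ends are illustrative assumptions, and the hidden states would come from an LM backbone in a real setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceLevelRewardModel(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Scalar reward output per token position, computed on top of LM backbone hidden states.
        self.reward_head = nn.Linear(hidden_size, 1)
        # Learned query for attention-pooling sentence scores into a response-level score.
        self.pool_query = nn.Parameter(torch.randn(hidden_size))
        self.key_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor, sentence_ends: torch.Tensor) -> torch.Tensor:
        # hidden_states: (seq_len, hidden_size) for one response.
        # sentence_ends: (num_sentences,) index of the last token of each sentence.
        per_token = self.reward_head(hidden_states).squeeze(-1)      # (seq_len,)
        at_ends = per_token[sentence_ends]                           # reward output at each sentence end
        # Differencing against the previous sentence end approximates the paper's
        # start/end differencing for contiguous sentences (an assumption of this sketch).
        prev = torch.cat([at_ends.new_zeros(1), at_ends[:-1]])
        sentence_rewards = at_ends - prev                            # (num_sentences,)
        # Attention over sentence-end hidden states gives the aggregation weights.
        keys = self.key_proj(hidden_states[sentence_ends])           # (num_sentences, hidden_size)
        weights = F.softmax(keys @ self.pool_query, dim=0)           # (num_sentences,)
        return (weights * sentence_rewards).sum()                    # scalar response-level score

def bradley_terry_loss(chosen_score: torch.Tensor, rejected_score: torch.Tensor) -> torch.Tensor:
    # Standard Bradley-Terry pairwise preference loss.
    return -F.logsigmoid(chosen_score - rejected_score)

# Toy usage with random hidden states in place of real backbone features:
model = SentenceLevelRewardModel(hidden_size=16)
h_chosen, h_rejected = torch.randn(30, 16), torch.randn(25, 16)
ends_chosen, ends_rejected = torch.tensor([9, 19, 29]), torch.tensor([11, 24])
loss = bradley_terry_loss(model(h_chosen, ends_chosen), model(h_rejected, ends_rejected))
loss.backward()
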
@article{qiu2025_2503.04793,
  title={Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference},
  author={Wenjie Qiu and Yi-Chen Li and Xuqin Zhang and Tianyi Zhang and Yihang Zhang and Zongzhang Zhang and Yang Yu},
  journal={arXiv preprint arXiv:2503.04793},
  year={2025}
}