
Multi-head Reward Aggregation Guided by Entropy

Abstract

Aligning large language models (LLMs) with safety guidelines typically involves reinforcement learning from human feedback (RLHF), relying on human-generated preference annotations. However, assigning consistent overall quality ratings is challenging, prompting recent research to shift towards detailed evaluations based on multiple specific safety criteria. This paper uncovers a consistent observation: safety rules characterized by high rating entropy are generally less reliable in identifying responses preferred by humans. Leveraging this finding, we introduce ENCORE, a straightforward entropy-guided approach that composes multi-head rewards by downweighting rules exhibiting high rating entropy. Theoretically, we demonstrate that rules with elevated entropy naturally receive minimal weighting in the Bradley-Terry optimization framework, justifying our entropy-based penalization. Through extensive experiments on RewardBench safety tasks, our method significantly surpasses several competitive baselines, including random weighting, uniform weighting, single-head Bradley-Terry models, and LLM-based judging methods. Our proposed approach is training-free, broadly applicable to various datasets, and maintains interpretability, offering a practical and effective solution for multi-attribute reward modeling.
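The abstract describes the method only at a high level, so the following minimal sketch illustrates one way entropy-guided aggregation of multi-head rewards could look in practice. It is a sketch under stated assumptions: the softmax-over-negative-entropy weighting, the temperature parameter, and all function and variable names here are illustrative choices, not the exact ENCORE formulation, which the abstract specifies only as downweighting rules with high rating entropy.

import numpy as np

def rating_entropy(ratings, num_levels):
    """Shannon entropy of the empirical rating distribution for one safety rule."""
    counts = np.bincount(ratings, minlength=num_levels).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def entropy_guided_weights(rule_ratings, num_levels, temperature=1.0):
    """Assign larger weights to rules with lower rating entropy.

    Uses a softmax over negative entropies; the temperature is an
    illustrative knob, not part of the paper's stated method.
    """
    entropies = np.array([rating_entropy(r, num_levels) for r in rule_ratings])
    logits = -entropies / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

def aggregate_rewards(head_scores, weights):
    """Weighted sum of per-rule reward heads -> one scalar reward per response."""
    return head_scores @ weights

# Hypothetical usage: 3 safety rules rated on a 5-point scale, 4 candidate responses.
rule_ratings = [np.random.randint(0, 5, size=1000) for _ in range(3)]
head_scores = np.random.randn(4, 3)  # per-response, per-rule reward head outputs
w = entropy_guided_weights(rule_ratings, num_levels=5)
rewards = aggregate_rewards(head_scores, w)

Because the weights are computed directly from the annotation statistics, this style of aggregation requires no additional training and keeps each rule's contribution inspectable, consistent with the training-free and interpretable properties claimed in the abstract.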

@article{li2025_2503.20995,
  title={Multi-head Reward Aggregation Guided by Entropy},
  author={Xiaomin Li and Xupeng Chen and Jingxuan Fan and Eric Hanchen Jiang and Mingye Gao},
  journal={arXiv preprint arXiv:2503.20995},
  year={2025}
}