
R3: Robust Rubric-Agnostic Reward Models

Main: 12 pages · Appendix: 17 pages · Bibliography: 5 pages · 4 figures · 19 tables
Abstract

Reward models are essential for aligning language model outputs with human preferences, yet existing approaches often lack both controllability and interpretability. These models are typically optimized for narrow objectives, limiting their generalizability to broader downstream tasks. Moreover, their scalar outputs are difficult to interpret without contextual reasoning. To address these limitations, we introduce R3, a novel reward modeling framework that is rubric-agnostic, generalizable across evaluation dimensions, and provides interpretable, reasoned score assignments. R3 enables more transparent and flexible evaluation of language models, supporting robust alignment with diverse human values and use cases. Our models, data, and code are available as open source at this https URL.
