Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?

17 February 2025
Xueru Wen
Jie Lou
Yaojie Lu
Hongyu Lin
Xing Yu
Xinyu Lu
Ben He
Xianpei Han
Debing Zhang
Le Sun
Abstract

Reward Models (RMs) are crucial for aligning language models with human preferences. Currently, the evaluation of RMs depends on measuring accuracy against a validation set of manually annotated preference data. Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored. In this work, we conduct experiments in a synthetic setting to investigate how differences between RMs, as measured by accuracy, translate into gaps in optimized policy performance. Our findings reveal that while there is a weak positive correlation between accuracy and downstream performance, policies optimized towards RMs with similar accuracy can exhibit quite different performance. Moreover, we find that the way accuracy is measured significantly impacts its ability to predict the final policy performance. Through the lens of the Regressional Goodhart effect, we recognize that accuracy, when used to measure RM quality, can fail to fully capture potential RM overoptimization. This underscores the inadequacy of relying solely on accuracy to reflect an RM's impact on policy optimization.
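To make the evaluation protocol discussed above concrete, here is a minimal sketch of RM accuracy computed as pairwise agreement with annotated preferences; the reward_fn interface and the (prompt, chosen, rejected) tuple format are illustrative assumptions, not the paper's actual evaluation code.

# Minimal sketch (illustrative, not the authors' code): RM accuracy as
# pairwise agreement with a validation set of annotated preference data.
from typing import Callable, List, Tuple

def rm_accuracy(
    reward_fn: Callable[[str, str], float],      # assumed RM interface: (prompt, response) -> scalar reward
    preference_set: List[Tuple[str, str, str]],  # assumed format: (prompt, chosen_response, rejected_response)
) -> float:
    """Fraction of pairs where the RM scores the human-preferred response higher."""
    correct = sum(
        reward_fn(prompt, chosen) > reward_fn(prompt, rejected)
        for prompt, chosen, rejected in preference_set
    )
    return correct / len(preference_set)

As the abstract notes, two RMs that score nearly identically under this metric can still induce quite different optimized policies, which is the gap the paper examines.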

@article{wen2025_2410.05584,
  title={Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?},
  author={Xueru Wen and Jie Lou and Yaojie Lu and Hongyu Lin and Xing Yu and Xinyu Lu and Ben He and Xianpei Han and Debing Zhang and Le Sun},
  journal={arXiv preprint arXiv:2410.05584},
  year={2025}
}