RRM: Robust Reward Model Training Mitigates Reward Hacking

20 September 2024
Tianqi Liu
Wei Xiong
Jie Jessie Ren
Lichang Chen
Junru Wu
Rishabh Joshi
Yang Gao
Jiaming Shen
Zhen Qin
Tianhe Yu
Daniel Sohn
Anastasiia Makarova
Jeremiah Liu
Yuan Liu
Bilal Piot
Abe Ittycheriah
Aviral Kumar
Mohammad Saleh
Abstract

Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. However, traditional RM training, which relies on response pairs tied to specific prompts, struggles to disentangle prompt-driven preferences from prompt-independent artifacts, such as response length and format. In this work, we expose a fundamental limitation of current RM training methods, where RMs fail to effectively distinguish between contextual signals and irrelevant artifacts when determining preferences. To address this, we introduce a causal framework that learns preferences independent of these artifacts and propose a novel data augmentation technique designed to eliminate them. Extensive experiments show that our approach successfully filters out undesirable artifacts, yielding a more robust reward model (RRM). Our RRM improves the performance of a pairwise reward model trained on Gemma-2-9b-it on RewardBench, increasing accuracy from 80.61% to 84.15%. Additionally, we train two DPO policies using both the RM and RRM, demonstrating that the RRM significantly enhances DPO-aligned policies, improving MT-Bench scores from 7.27 to 8.31 and length-controlled win-rates in AlpacaEval-2 from 33.46% to 52.49%.
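The core of the proposed remedy is the data augmentation step: comparisons are constructed so that prompt-independent artifacts such as length and formatting no longer predict the preference label. The Python sketch below illustrates one plausible reading of that idea, in which a prompt's chosen response is additionally compared against responses sampled from unrelated prompts; the function and field names (augment_with_artifact_free_pairs, 'prompt'/'chosen'/'rejected') are illustrative assumptions, not the authors' released code.

import random

def augment_with_artifact_free_pairs(dataset, num_negatives=1, seed=0):
    # Sketch under assumed data layout: `dataset` is a list of dicts with
    # keys 'prompt', 'chosen', 'rejected' (a standard pairwise-preference format).
    rng = random.Random(seed)
    augmented = list(dataset)
    for i, example in enumerate(dataset):
        for _ in range(num_negatives):
            j = rng.randrange(len(dataset))
            if j == i:
                continue  # the extra negative must come from a *different* prompt
            # A response written for an unrelated prompt can be just as long and
            # well formatted as the chosen one, so artifacts alone cannot win here;
            # the reward model is pushed to score prompt-response relevance.
            off_prompt_response = rng.choice(
                [dataset[j]['chosen'], dataset[j]['rejected']]
            )
            augmented.append({
                'prompt': example['prompt'],
                'chosen': example['chosen'],      # on-prompt winner
                'rejected': off_prompt_response,  # off-prompt loser
            })
    return augmented

Training a pairwise (e.g. Bradley-Terry) reward model on such an augmented set forces the score to depend on how well a response fits its prompt rather than on length or format alone, which is the robustness property the abstract reports.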

View on arXiv: https://arxiv.org/abs/2409.13156
@article{liu2024_2409.13156,
  title={RRM: Robust Reward Model Training Mitigates Reward Hacking},
  author={Tianqi Liu and Wei Xiong and Jie Ren and Lichang Chen and Junru Wu and Rishabh Joshi and Yang Gao and Jiaming Shen and Zhen Qin and Tianhe Yu and Daniel Sohn and Anastasiia Makarova and Jeremiah Liu and Yuan Liu and Bilal Piot and Abe Ittycheriah and Aviral Kumar and Mohammad Saleh},
  journal={arXiv preprint arXiv:2409.13156},
  year={2024}
}