
GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment

Abstract

Large Language Models (LLMs) exhibit impressive capabilities but require careful alignment with human preferences. Traditional training-time methods finetune LLMs using human preference datasets but incur significant training costs and require repeated training to handle diverse user preferences. Test-time alignment methods address this by using reward models (RMs) to guide frozen LLMs without retraining. However, existing test-time approaches rely on trajectory-level RMs, which are designed to evaluate complete responses, making them unsuitable for autoregressive text generation that requires computing next-token rewards from partial responses. To address this, we introduce GenARM, a test-time alignment approach that leverages the Autoregressive Reward Model, a novel reward parametrization designed to predict next-token rewards for efficient and effective autoregressive generation. Theoretically, we demonstrate that this parametrization can provably guide frozen LLMs toward any distribution achievable by traditional RMs within the KL-regularized reinforcement learning framework. Experimental results show that GenARM significantly outperforms prior test-time alignment baselines and matches the performance of training-time methods. Additionally, GenARM enables efficient weak-to-strong guidance, aligning larger LLMs with smaller RMs without the high costs of training larger models. Furthermore, GenARM supports multi-objective alignment, allowing real-time trade-offs between preference dimensions and catering to diverse user preferences without retraining. Our project page is available at: this https URL.
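To make the mechanism in the abstract concrete, the following is a brief sketch of how an autoregressive reward parametrization can enable next-token guidance; the notation (\pi_{\text{base}}, \pi_r, \beta) is ours and is meant only as an illustration consistent with the abstract, not as the paper's exact formulation. In the KL-regularized reinforcement learning framework, the optimal aligned policy takes the form

  \pi^*(y \mid x) \propto \pi_{\text{base}}(y \mid x)\, \exp\!\big(r(x, y)/\beta\big),

where \pi_{\text{base}} is the frozen LLM, r is the reward, and \beta controls the strength of the KL regularization. If the reward is parametrized autoregressively, e.g. r(x, y) = \sum_t \log \pi_r(y_t \mid x, y_{<t}) for a token-level reward model \pi_r, then guided decoding can combine the two next-token distributions directly,

  \pi_{\text{guided}}(y_t \mid x, y_{<t}) \propto \pi_{\text{base}}(y_t \mid x, y_{<t})\, \pi_r(y_t \mid x, y_{<t})^{1/\beta},

so each next token is sampled using only the base LLM's and the reward model's next-token distributions, with no retraining of the base LLM. Under this sketch, multi-objective alignment would amount to replacing the single exponent 1/\beta with per-objective weights on several reward models' distributions, which can be adjusted at inference time.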

@article{xu2025_2410.08193,
  title={GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment},
  author={Yuancheng Xu and Udari Madhushani Sehwag and Alec Koppel and Sicheng Zhu and Bang An and Furong Huang and Sumitra Ganesh},
  journal={arXiv preprint arXiv:2410.08193},
  year={2025}
}