Existing LLM-based agents have achieved strong performance on held-in tasks, but their generalizability to unseen tasks remains poor. Hence, some recent work focuses on fine-tuning the policy model on more diverse tasks to improve generalizability. In this work, we find that fine-tuning a reward model to guide the policy model is more robust than directly fine-tuning the policy model. Based on this finding, we propose AgentRM, a generalizable reward model, to guide the policy model for effective test-time search. We comprehensively investigate three approaches to constructing the reward model: explicit reward modeling, implicit reward modeling, and LLM-as-a-judge. We then use AgentRM to guide answer generation with Best-of-N sampling and step-level beam search. On nine agent tasks spanning four types, AgentRM improves the base policy model by points on average, surpassing the top general agent by . Moreover, it demonstrates weak-to-strong generalization, yielding a greater improvement of on the LLaMA-3-70B policy model. As for specializability, AgentRM can also boost a fine-tuned policy model, outperforming the top specialized agent by on three held-in tasks. Further analysis verifies its effectiveness in test-time scaling. Code will be released to facilitate research in this area.
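The reward-guided test-time search described in the abstract can be illustrated with a minimal sketch: a reward model scores candidate trajectories, and the policy's samples are either reranked as complete answers (Best-of-N) or expanded and pruned step by step (step-level beam search). The callables `propose_full_trajectories`, `propose_actions`, `score_trajectory`, and `is_terminal` below are hypothetical stand-ins for the policy and reward model interfaces, assumed for illustration rather than taken from the paper's implementation.

```python
# Minimal sketch of reward-guided test-time search, assuming a policy that
# proposes candidate actions/trajectories and a reward model that scores
# (partial) trajectories. All interfaces here are illustrative placeholders.
from typing import Callable, List, Sequence, Tuple


def best_of_n(
    propose_full_trajectories: Callable[[int], List[Sequence[str]]],
    score_trajectory: Callable[[Sequence[str]], float],
    n: int = 8,
) -> Sequence[str]:
    """Sample N complete trajectories and keep the one the reward model ranks highest."""
    candidates = propose_full_trajectories(n)
    return max(candidates, key=score_trajectory)


def step_level_beam_search(
    propose_actions: Callable[[Sequence[str]], List[str]],
    score_trajectory: Callable[[Sequence[str]], float],
    is_terminal: Callable[[Sequence[str]], bool],
    beam_width: int = 4,
    max_steps: int = 10,
) -> Sequence[str]:
    """Expand partial trajectories one action at a time, pruning to the top beams."""
    beams: List[Tuple[str, ...]] = [()]
    for _ in range(max_steps):
        expanded: List[Tuple[str, ...]] = []
        for prefix in beams:
            if is_terminal(prefix):
                expanded.append(prefix)  # keep finished trajectories as candidates
                continue
            for action in propose_actions(prefix):
                expanded.append(tuple(prefix) + (action,))
        if not expanded:
            break
        # Rank all candidates with the reward model and keep the best beams.
        beams = sorted(expanded, key=score_trajectory, reverse=True)[:beam_width]
        if all(is_terminal(b) for b in beams):
            break
    return beams[0]
```

In this sketch the same reward model drives both procedures; the only difference is whether it scores complete trajectories once (Best-of-N) or partial trajectories at every step (beam search).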
@article{xia2025_2502.18407,
  title   = {AgentRM: Enhancing Agent Generalization with Reward Modeling},
  author  = {Yu Xia and Jingru Fan and Weize Chen and Siyu Yan and Xin Cong and Zhong Zhang and Yaxi Lu and Yankai Lin and Zhiyuan Liu and Maosong Sun},
  journal = {arXiv preprint arXiv:2502.18407},
  year    = {2025}
}