
DARLR: Dual-Agent Offline Reinforcement Learning for Recommender Systems with Dynamic Reward

12 May 2025
Yi Zhang
Ruihong Qiu
Xuwei Xu
Jiajun Liu
Sen Wang
    OffRL
Abstract

Model-based offline reinforcement learning (RL) has emerged as a promising approach for recommender systems, enabling effective policy learning by interacting with frozen world models. However, the reward functions in these world models, trained on sparse offline logs, often suffer from inaccuracies. Specifically, existing methods face two major limitations in addressing this challenge: (1) deterministic use of reward functions as static look-up tables, which propagates inaccuracies during policy learning, and (2) static uncertainty designs that fail to effectively capture decision risks and mitigate the impact of these inaccuracies. In this work, a dual-agent framework, DARLR, is proposed to dynamically update world models to enhance recommendation policies. To achieve this, a selector is introduced to identify reference users by balancing similarity and diversity, so that the recommender can aggregate information from these users and iteratively refine reward estimations for dynamic reward shaping. Further, the statistical features of the selected users guide the dynamic adaptation of an uncertainty penalty to better align with evolving recommendation requirements. Extensive experiments on four benchmark datasets demonstrate the superior performance of DARLR, validating its effectiveness. The code is available at this https URL.
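As a rough illustration of the mechanism the abstract describes, the sketch below pairs an MMR-style reference-user selector (trading off similarity to the target user against diversity among already-selected users) with a reward-shaping step that aggregates the selected users' reward estimates and subtracts an uncertainty penalty scaled by their spread. All function names, the aggregation weights, and the choice of mean/standard-deviation statistics are illustrative assumptions; in the paper, the selector and recommender are learned agents, and the actual implementation is in the linked repository.

# Hypothetical sketch of the dual-agent idea from the abstract; not the
# authors' implementation. Selection rule and reward statistics are assumptions.
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between a vector `a` and each row of matrix `b`.
    a = a / (np.linalg.norm(a) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return b @ a

def select_reference_users(target_emb, user_embs, k=5, lam=0.5):
    # "Selector" stand-in: greedily pick k reference users, balancing
    # similarity to the target user against redundancy with users already
    # picked (an MMR-style heuristic, assumed here for illustration).
    sims = cosine_sim(target_emb, user_embs)
    selected, candidates = [], list(range(len(user_embs)))
    while len(selected) < k and candidates:
        def score(i):
            if not selected:
                return sims[i]
            redundancy = float(np.max(cosine_sim(user_embs[i], user_embs[selected])))
            return lam * sims[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

def shaped_reward(model_reward, ref_rewards, beta=1.0):
    # "Recommender" side: refine the world model's reward by averaging it with
    # the reference users' estimates, then penalize by their spread as a
    # dynamic uncertainty term (illustrative choice of statistics).
    refined = 0.5 * model_reward + 0.5 * float(np.mean(ref_rewards))
    penalty = beta * float(np.std(ref_rewards))
    return refined - penalty

# Toy usage with random embeddings and rewards.
rng = np.random.default_rng(0)
user_embs = rng.normal(size=(100, 16))
target = rng.normal(size=16)
refs = select_reference_users(target, user_embs, k=5, lam=0.7)
ref_rewards = rng.normal(loc=0.3, scale=0.1, size=len(refs))
print("reference users:", refs)
print("shaped reward:", shaped_reward(model_reward=0.4, ref_rewards=ref_rewards))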

@article{zhang2025_2505.07257,
  title={DARLR: Dual-Agent Offline Reinforcement Learning for Recommender Systems with Dynamic Reward},
  author={Yi Zhang and Ruihong Qiu and Xuwei Xu and Jiajun Liu and Sen Wang},
  journal={arXiv preprint arXiv:2505.07257},
  year={2025}
}