Residual Policy Gradient: A Reward View of KL-regularized Objective

14 March 2025
Pengcheng Wang
Xinghao Zhu
Yuxin Chen
Chenfeng Xu
Masayoshi Tomizuka
Chenran Li
Abstract

Reinforcement Learning and Imitation Learning have achieved widespread success in many domains but remain constrained during real-world deployment. One of the main issues is that deployment introduces additional requirements that were not considered during training. To address this challenge, policy customization has been introduced, aiming to adapt a prior policy while preserving its inherent properties and meeting new task-specific requirements. A principled approach to policy customization is Residual Q-Learning (RQL), which formulates the problem as a Markov Decision Process (MDP) and derives a family of value-based learning algorithms. However, RQL has not yet been applied to policy gradient methods, which restricts its applicability, especially in tasks where policy gradient methods have already proven more effective. In this work, we first derive a concise form of Soft Policy Gradient as a preliminary. Building on this, we introduce Residual Policy Gradient (RPG), which extends RQL to policy gradient methods, enabling policy customization in gradient-based RL settings. Through the lens of RPG, we rethink the KL-regularized objective widely used in RL fine-tuning. We show that, under certain assumptions, the KL-regularized objective leads to a maximum-entropy policy that balances the inherent properties and task-specific requirements at the reward level. Our experiments in MuJoCo demonstrate the effectiveness of Soft Policy Gradient and Residual Policy Gradient.
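As a brief, hedged sketch of the "reward view" the abstract refers to (the notation, including the prior policy $\pi_0$, temperature $\beta$, and discount $\gamma$, is my own and not taken from the paper), the standard KL-regularized objective can be rewritten so that the prior enters as a reward-shaping term plus an entropy bonus:

\begin{aligned}
J(\pi) &= \mathbb{E}_{\pi}\!\left[\sum_t \gamma^t r(s_t,a_t)\right]
 - \beta\,\mathbb{E}_{\pi}\!\left[\sum_t \gamma^t D_{\mathrm{KL}}\!\big(\pi(\cdot\mid s_t)\,\|\,\pi_0(\cdot\mid s_t)\big)\right] \\
 &= \mathbb{E}_{\pi}\!\left[\sum_t \gamma^t \Big(\underbrace{r(s_t,a_t) + \beta \log \pi_0(a_t\mid s_t)}_{\text{reshaped reward}} \;-\; \beta \log \pi(a_t\mid s_t)\Big)\right].
\end{aligned}

The second line is the maximum-entropy RL objective with the modified reward $r + \beta \log \pi_0$, so the prior policy's preferences and the task reward are traded off at the reward level, consistent with the abstract's claim. The paper's precise assumptions and derivation may differ; this block only illustrates the standard decomposition.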

View on arXiv
@article{wang2025_2503.11019,
  title={Residual Policy Gradient: A Reward View of KL-regularized Objective},
  author={Pengcheng Wang and Xinghao Zhu and Yuxin Chen and Chenfeng Xu and Masayoshi Tomizuka and Chenran Li},
  journal={arXiv preprint arXiv:2503.11019},
  year={2025}
}