Breaking Habits: On the Role of the Advantage Function in Learning Causal State Representations

13 June 2025
Miguel Suau
Main: 9 pages · 12 figures · 3 tables · Bibliography: 2 pages · Appendix: 5 pages
Abstract

Recent work has shown that reinforcement learning agents can develop policies that exploit spurious correlations between rewards and observations. This phenomenon, known as policy confounding, arises because the agent's policy influences both past and future observation variables, creating a feedback loop that can hinder the agent's ability to generalize beyond its usual trajectories. In this paper, we show that the advantage function, commonly used in policy gradient methods, not only reduces the variance of gradient estimates but also mitigates the effects of policy confounding. By adjusting action values relative to the state representation, the advantage function downweights state-action pairs that are more likely under the current policy, breaking spurious correlations and encouraging the agent to focus on causal factors. We provide both analytical and empirical evidence demonstrating that training with the advantage function leads to improved out-of-trajectory performance.
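To make the mechanism concrete, here is a minimal sketch of the advantage-weighted policy gradient the abstract refers to (REINFORCE with a learned value baseline), not the paper's own experimental setup. The episode data, critic estimates, and score-function gradients below are hypothetical placeholders.

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Discounted return G_t = sum_k gamma^k * r_{t+k} for one episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def policy_gradient_terms(grad_log_probs, rewards, values, gamma=0.99):
    """
    Per-step policy-gradient terms grad log pi(a_t|s_t) * A_t, with the
    advantage A_t = G_t - V(s_t). Subtracting V(s_t) removes the part of
    the return that is predictable from the state alone, so state-action
    pairs the current policy visits routinely contribute little gradient,
    which is the downweighting effect the abstract credits with breaking
    spurious, policy-confounded correlations.
    """
    G = discounted_returns(rewards, gamma)
    advantages = G - np.asarray(values)
    return [g * a for g, a in zip(grad_log_probs, advantages)]

# Toy usage: a 3-step episode with 2-dimensional score-function gradients.
grads = [np.array([0.5, -0.2]), np.array([0.1, 0.3]), np.array([-0.4, 0.0])]
rewards = [0.0, 0.0, 1.0]
values = [0.9, 0.9, 0.9]  # hypothetical critic estimates V(s_t)
update = sum(policy_gradient_terms(grads, rewards, values))
print(update)             # episodic gradient estimate

Note how a habitual action whose return the critic already predicts well yields an advantage near zero, whereas an off-trajectory action with an unexpected outcome produces a large learning signal.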

@article{suau2025_2506.11912,
  title={Breaking Habits: On the Role of the Advantage Function in Learning Causal State Representations},
  author={Miguel Suau},
  journal={arXiv preprint arXiv:2506.11912},
  year={2025}
}