ResearchTrend.AI


Theoretical Barriers in Bellman-Based Reinforcement Learning

17 February 2025
Brieuc Pinon
Raphaël Jungers
Jean-Charles Delvenne
Abstract

Reinforcement Learning algorithms designed for high-dimensional spaces often enforce the Bellman equation on a sampled subset of states, relying on generalization to propagate knowledge across the state space. In this paper, we identify and formalize a fundamental limitation of this common approach. Specifically, we construct counterexample problems with a simple structure that this approach fails to exploit. Our findings reveal that such algorithms can neglect critical information about the problems, leading to inefficiencies. Furthermore, we extend this negative result to another approach from the literature: Hindsight Experience Replay applied to learning state-to-state reachability.
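A minimal sketch of the approach the abstract critiques: enforce the Bellman equation only on a sampled subset of states, leaving the rest to generalization. The MDP, sample size, and number of sweeps below are illustrative assumptions, not taken from the paper; the point is only that the Bellman residual is driven to zero on the sampled states while unsampled states are never updated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 2, 0.9

# Illustrative deterministic MDP: successor table and rewards.
P = rng.integers(0, n_states, size=(n_states, n_actions))  # next state
R = rng.standard_normal((n_states, n_actions))             # reward

# Tabular Q-function (the simplest "function approximator").
Q = np.zeros((n_states, n_actions))

# Enforce the Bellman equation only on a sampled subset of states.
sampled = rng.choice(n_states, size=6, replace=False)
for _ in range(200):
    for s in sampled:
        for a in range(n_actions):
            # Bellman backup: Q(s,a) <- R(s,a) + gamma * max_a' Q(s',a')
            Q[s, a] = R[s, a] + gamma * Q[P[s, a]].max()

# Residual is near zero on sampled states; unsampled states stay untouched.
res = max(abs(R[s, a] + gamma * Q[P[s, a]].max() - Q[s, a])
          for s in sampled for a in range(n_actions))
print(res)
```

Because values at unsampled states are frozen, the restricted backup is still a γ-contraction on the sampled values, so the residual vanishes there; the counterexamples in the paper target exactly the gap between satisfying the Bellman equation on such a subset and actually exploiting the problem's structure.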

@article{pinon2025_2502.11968,
  title={Theoretical Barriers in Bellman-Based Reinforcement Learning},
  author={Brieuc Pinon and Raphaël Jungers and Jean-Charles Delvenne},
  journal={arXiv preprint arXiv:2502.11968},
  year={2025}
}