DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents

Main: 10 pages · 15 figures · 5 tables · Bibliography: 3 pages · Appendix: 18 pages
Abstract

Hierarchical Reinforcement Learning (HRL) agents often struggle with long-horizon visual planning due to their reliance on error-prone distance metrics. We propose Discrete Hierarchical Planning (DHP), a method that replaces continuous distance estimates with discrete reachability checks to evaluate subgoal feasibility. DHP recursively constructs tree-structured plans by decomposing long-term goals into sequences of simpler subtasks, using a novel advantage estimation strategy that inherently rewards shorter plans and generalizes beyond training depths. In addition, to address the data efficiency challenge, we introduce an exploration strategy that generates targeted training examples for the planning modules without needing expert data. Experiments in 25-room navigation environments demonstrate a 100% success rate (vs. 82% baseline) and a 73-step average episode length (vs. 158-step baseline). The method also generalizes to momentum-based control tasks and requires only $\log N$ steps for replanning. Theoretical analysis and ablations validate our design choices.
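The recursive tree-structured decomposition described above can be illustrated with a minimal sketch. The helpers `reachable` and `propose_subgoal` below are hypothetical stand-ins for the paper's learned reachability check and subgoal generator, applied here to a toy one-dimensional navigation task; the real method operates on visual states.

```python
# Minimal sketch of recursive tree-structured subgoal planning (assumptions:
# `reachable` and `propose_subgoal` are hypothetical stand-ins for the learned
# reachability check and subgoal generator described in the abstract).

def reachable(state, goal):
    # Hypothetical discrete reachability check: on an integer line, "reachable"
    # means the goal is at most one step away.
    return abs(goal - state) <= 1

def propose_subgoal(state, goal):
    # Hypothetical subgoal generator: the midpoint between state and goal.
    return (state + goal) // 2

def plan(state, goal, max_depth=10):
    """Recursively decompose (state -> goal) into a sequence of subgoals.

    Returns a flat list of subgoals ending in `goal`. Because each split
    roughly halves the remaining gap, the recursion depth is O(log N) for a
    goal N steps away -- matching the log N replanning cost in the abstract.
    """
    if max_depth == 0 or reachable(state, goal):
        return [goal]
    mid = propose_subgoal(state, goal)
    # Left subtree plans state -> mid; right subtree plans mid -> goal.
    return plan(state, mid, max_depth - 1) + plan(mid, goal, max_depth - 1)

print(plan(0, 8))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

Each internal node of the resulting plan tree asks only a binary question (is the subgoal reachable from here?) rather than estimating a continuous distance, which is the core substitution DHP makes.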
