Demystifying Linear MDPs and Novel Dynamics Aggregation Framework

Abstract

In this work, we prove that, in linear MDPs, the feature dimension $d$ is lower bounded by $S/U$ in order to adequately represent transition probabilities, where $S$ is the size of the state space and $U$ is the maximum size of directly reachable states. Hence, $d$ can still scale with $S$ depending on the direct reachability of the environment. To address this limitation of linear MDPs, we propose a novel structural aggregation framework based on dynamics, named "dynamics aggregation". For this newly proposed framework, we design a provably efficient hierarchical reinforcement learning (HRL) algorithm with linear function approximation that leverages the aggregated sub-structures. Our proposed algorithm exhibits statistical efficiency, achieving a regret of $\tilde{O}(d_{\psi}^{3/2} H^{3/2}\sqrt{NT})$, where $d_{\psi}$ represents the feature dimension of the aggregated subMDPs and $N$ denotes the number of aggregated subMDPs. We establish that the condition $d_{\psi}^3 N \ll d^{3}$ is readily met in most real-world environments with hierarchical structures, enabling a substantial improvement in the regret bound compared to LSVI-UCB, which enjoys a regret of $\tilde{O}(d^{3/2} H^{3/2}\sqrt{T})$. To the best of our knowledge, this work presents the first HRL algorithm with linear function approximation that offers provable guarantees.
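To give a rough sense of how the two regret bounds compare, the following minimal Python sketch plugs in hypothetical problem sizes and evaluates both expressions. The specific values of d, d_psi, N, H, and T are illustrative assumptions chosen for this sketch, not figures reported in the paper; the ratio of the bounds reduces to sqrt(d_psi^3 N / d^3), which is why the condition $d_{\psi}^3 N \ll d^{3}$ translates directly into a tighter bound.

    import math

    # Hypothetical problem sizes (illustrative assumptions, not values from the paper):
    # a flat linear MDP with feature dimension d, aggregated into N subMDPs whose
    # features have dimension d_psi, with horizon H and T elapsed steps.
    d, d_psi, N, H, T = 100, 10, 20, 50, 1_000_000

    # Regret bound of the proposed HRL algorithm: ~ d_psi^{3/2} * H^{3/2} * sqrt(N * T)
    hrl_bound = d_psi ** 1.5 * H ** 1.5 * math.sqrt(N * T)

    # Regret bound of LSVI-UCB: ~ d^{3/2} * H^{3/2} * sqrt(T)
    lsvi_bound = d ** 1.5 * H ** 1.5 * math.sqrt(T)

    print(f"HRL bound   : {hrl_bound:.3e}")
    print(f"LSVI bound  : {lsvi_bound:.3e}")
    print(f"ratio       : {hrl_bound / lsvi_bound:.3f}")
    print(f"sqrt(d_psi^3 N / d^3) = {math.sqrt(d_psi**3 * N / d**3):.3f}")

With these assumed values, d_psi^3 N = 2 x 10^4 while d^3 = 10^6, so the ratio of the bounds is about 0.14, i.e. the aggregated bound is roughly an order of magnitude smaller.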
