
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation

Abstract

We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its significant benefits, incorporating the non-linear function raises substantial challenges in both statistical and computational efficiency. The best-known result of Hwang and Oh [2023] achieves an $\widetilde{\mathcal{O}}(\kappa^{-1}dH^2\sqrt{K})$ regret upper bound, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes. However, we observe that $\kappa^{-1}$ exhibits polynomial dependence on the number of reachable states, which can be as large as the size of the state space in the worst case and thus undermines the motivation for function approximation. Moreover, their method requires storing all historical data, and its time complexity scales linearly with the number of episodes, which is computationally expensive. In this work, we propose a statistically efficient algorithm that achieves a regret of $\widetilde{\mathcal{O}}(dH^2\sqrt{K} + \kappa^{-1}d^2H^2)$, eliminating the dependence on $\kappa^{-1}$ in the dominant term for the first time. We then address the computational challenges by introducing an enhanced algorithm that achieves the same regret guarantee but with only constant computational cost. Finally, we establish the first lower bound for this problem, justifying the optimality of our results in $d$ and $K$.
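
For concreteness, the MNL parameterization referred to above models each transition as a softmax over the next states reachable from the current state-action pair. A minimal sketch in generic notation (the feature map $\varphi$, parameter $\theta^{\star}$, and reachable-state set $\mathcal{S}_{s,a}$ are illustrative symbols, not necessarily the paper's exact ones):

$$
\mathbb{P}(s' \mid s, a) \;=\; \frac{\exp\!\big(\varphi(s,a,s')^{\top}\theta^{\star}\big)}{\sum_{\tilde{s}\in\mathcal{S}_{s,a}} \exp\!\big(\varphi(s,a,\tilde{s})^{\top}\theta^{\star}\big)}, \qquad \varphi(s,a,s')\in\mathbb{R}^{d}.
$$

The softmax normalization is what guarantees a valid probability distribution over next states, while the number of terms in the denominator, i.e., the number of reachable states, is the quantity on which $\kappa^{-1}$ can depend polynomially.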
