
Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback

Abstract

We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback, without prior knowledge of the transitions or access to simulators. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, ensures a regret of $\widetilde{\mathcal{O}}\left(\sqrt{K}\right)$, where $K$ is the number of episodes. This is the first result with the optimal $K$ dependence in the considered setting. The second algorithm, which is based on the policy optimization framework, guarantees a regret of $\widetilde{\mathcal{O}}\left(K^{3/4}\right)$ and is computationally efficient. Both our results significantly improve over the state-of-the-art: a computationally inefficient algorithm by Kong et al. [2023] with $\widetilde{\mathcal{O}}\left(K^{4/5}+\mathrm{poly}\left(\tfrac{1}{\lambda_{\min}}\right)\right)$ regret, for some problem-dependent constant $\lambda_{\min}$ that can be arbitrarily close to zero, and a computationally efficient algorithm by Sherman et al. [2023b] with $\widetilde{\mathcal{O}}\left(K^{6/7}\right)$ regret.
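For context, the abstract does not spell out the regret notion being bounded; the following is the standard formulation for adversarial MDPs with a fixed-policy comparator, with notation (the loss functions $\ell_k$ and the value $V^{\pi}$) assumed here rather than taken from the paper:
\[
\mathrm{Regret}_K \;=\; \sum_{k=1}^{K} V^{\pi_k}(\ell_k) \;-\; \min_{\pi} \sum_{k=1}^{K} V^{\pi}(\ell_k),
\]
where $\pi_k$ is the policy played in episode $k$, $\ell_k$ is the adversarially chosen loss function of that episode, and $V^{\pi}(\ell_k)$ denotes the expected cumulative loss of policy $\pi$ under $\ell_k$. Under this reading, the first algorithm attains the $\sqrt{K}$ rate that is optimal in $K$, while the second trades statistical rate for computational efficiency.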
