Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback

17 October 2023
Haolin Liu
Chen-Yu Wei
Julian Zimmert
arXiv:2310.11550 · PDF · HTML
Abstract

We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback, without prior knowledge on transitions or access to simulators. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, ensures a regret of $\widetilde{\mathcal{O}}(\sqrt{K})$, where $K$ is the number of episodes. This is the first result with the optimal $K$ dependence in the considered setting. The second algorithm, which is based on the policy optimization framework, guarantees a regret of $\widetilde{\mathcal{O}}(K^{3/4})$ and is computationally efficient. Both our results significantly improve over the state-of-the-art: a computationally inefficient algorithm by Kong et al. [2023] with $\widetilde{\mathcal{O}}(K^{4/5} + \mathrm{poly}(1/\lambda_{\min}))$ regret, for some problem-dependent constant $\lambda_{\min}$ that can be arbitrarily close to zero, and a computationally efficient algorithm by Sherman et al. [2023b] with $\widetilde{\mathcal{O}}(K^{6/7})$ regret.
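For context, the following is a sketch of the standard setup typically assumed in this line of work (a finite-horizon linear MDP with a known feature map and adversarial, bandit-feedback losses); the notation $\phi$, $\mu_h$, $\theta_{k,h}$, $V^{\pi}_k$ is illustrative and not taken verbatim from the abstract.

% Sketch of the standard linear-MDP setup with adversarial losses (illustrative notation).
% Linear structure of transitions and per-episode losses (episode k, step h):
\[
  P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle,
  \qquad
  \ell_{k,h}(s, a) = \langle \phi(s, a), \theta_{k,h} \rangle .
\]
% Regret over K episodes against the best fixed policy, where V^{\pi}_k denotes
% the expected total loss incurred by policy \pi under the losses of episode k:
\[
  \mathrm{Reg}_K
  = \mathbb{E}\!\left[ \sum_{k=1}^{K} V^{\pi_k}_k \right]
  - \min_{\pi} \, \mathbb{E}\!\left[ \sum_{k=1}^{K} V^{\pi}_k \right].
\]

In this notation, the paper's bounds read as $\mathrm{Reg}_K = \widetilde{\mathcal{O}}(\sqrt{K})$ for the inefficient algorithm and $\widetilde{\mathcal{O}}(K^{3/4})$ for the efficient policy-optimization algorithm.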
