A Model Selection Approach for Corruption Robust Reinforcement Learning

arXiv:2110.03580
31 December 2024
Chen-Yu Wei, Christoph Dann, Julian Zimmert
Abstract

We develop a model selection approach to tackle reinforcement learning with adversarial corruption in both transition and reward. For finite-horizon tabular MDPs, without prior knowledge of the total amount of corruption, our algorithm achieves a regret bound of $\widetilde{\mathcal{O}}(\min\{\frac{1}{\Delta}, \sqrt{T}\} + C)$ where $T$ is the number of episodes, $C$ is the total amount of corruption, and $\Delta$ is the reward gap between the best and the second-best policy. This is the first worst-case optimal bound achieved without knowledge of $C$, improving previous results of Lykouris et al. (2021); Chen et al. (2021); Wu et al. (2021). For finite-horizon linear MDPs, we develop a computationally efficient algorithm with a regret bound of $\widetilde{\mathcal{O}}(\sqrt{(1+C)T})$, and another computationally inefficient one with $\widetilde{\mathcal{O}}(\sqrt{T} + C)$, improving the result of Lykouris et al. (2021) and answering an open question by Zhang et al. (2021b). Finally, our model selection framework can be easily applied to other settings including linear bandits, linear contextual bandits, and MDPs with general function approximation, leading to several improved or new results.
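To make the three bounds easier to compare side by side, the following LaTeX sketch restates them. The definition of the regret $R_T$ below is the standard episodic one and is our assumption for illustration, not text quoted from the paper.

% Assumed (standard) definition of episodic regret: V^{\pi^\star} is the value
% of an optimal policy, and \pi_t is the policy the learner plays in episode t.
\[
  R_T = \sum_{t=1}^{T} \left( V^{\pi^\star} - V^{\pi_t} \right)
\]
% Bounds stated in the abstract:
\[
  \text{tabular MDPs:} \quad
  R_T = \widetilde{\mathcal{O}}\Big( \min\Big\{ \tfrac{1}{\Delta}, \sqrt{T} \Big\} + C \Big)
\]
\[
  \text{linear MDPs:} \quad
  R_T = \widetilde{\mathcal{O}}\big( \sqrt{(1+C)\,T} \big) \ \text{(efficient algorithm)}, \qquad
  R_T = \widetilde{\mathcal{O}}\big( \sqrt{T} + C \big) \ \text{(inefficient algorithm)}
\]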
