Minimax Regret Bounds for Reinforcement Learning

16 March 2017
M. G. Azar
Ian Osband
Rémi Munos
Abstract

We consider the problem of provably optimal exploration in reinforcement learning for finite-horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of $\tilde{O}(\sqrt{HSAT} + H^2 S^2 A + H\sqrt{T})$, where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions, and $T$ the number of time-steps. This result improves over the best previously known bound, $\tilde{O}(HS\sqrt{AT})$, achieved by the UCRL2 algorithm of Jaksch et al., 2010. The key significance of our new result is that when $T \geq H^3 S^3 A$ and $SA \geq H$, it yields a regret of $\tilde{O}(\sqrt{HSAT})$, matching the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We carefully apply concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve the scaling in $S$), and we define Bernstein-based "exploration bonuses" that use the empirical variance of the estimated values at the next states (to improve the scaling in $H$).
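
The abstract describes optimistic value iteration with Bernstein-style exploration bonuses built from the empirical variance of the estimated next-state values. Below is a rough, illustrative sketch of that idea for a tabular finite-horizon MDP, assuming empirical transition counts are available; the constants, log factors, and treatment of unvisited state-action pairs are placeholder choices and all names (`optimistic_value_iteration`, `counts`, etc.) are hypothetical, not the paper's exact algorithm or notation.

```python
import numpy as np

def optimistic_value_iteration(counts, rewards, H, delta=0.05):
    """Sketch of optimistic backward induction with Bernstein-style bonuses.

    counts  : array (S, A, S) of observed transition counts N(s, a, s')
    rewards : array (S, A) of mean rewards in [0, 1], assumed known here
    H       : episode horizon
    delta   : confidence parameter

    Returns optimistic Q-values of shape (H, S, A) and a greedy policy.
    Shapes, constants, and log factors are illustrative only.
    """
    S, A, _ = counts.shape
    n = np.maximum(counts.sum(axis=2), 1)           # visit counts N(s, a), floored at 1
    p_hat = counts / n[:, :, None]                  # empirical transition probabilities
    log_term = np.log(2 * S * A * H / delta)        # illustrative log factor

    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))                        # V[H] = 0 (terminal values)

    for h in range(H - 1, -1, -1):
        # Expected next-state value and its empirical variance under p_hat.
        ev = p_hat @ V[h + 1]                       # shape (S, A)
        var = p_hat @ (V[h + 1] ** 2) - ev ** 2     # Var_{s' ~ p_hat}[V_{h+1}(s')]
        var = np.maximum(var, 0.0)

        # Bernstein-style bonus: variance-dependent term plus a lower-order 1/n term.
        bonus = np.sqrt(2 * var * log_term / n) + (7 * H * log_term) / (3 * n)

        Q[h] = np.minimum(rewards + ev + bonus, H)  # clip at the maximum possible return
        V[h] = Q[h].max(axis=1)

    policy = Q.argmax(axis=2)                       # greedy policy w.r.t. optimistic Q
    return Q, policy

# Tiny usage example with random counts (purely illustrative).
rng = np.random.default_rng(0)
S, A, H = 4, 2, 5
counts = rng.integers(0, 20, size=(S, A, S)).astype(float)
rewards = rng.random((S, A))
Q, policy = optimistic_value_iteration(counts, rewards, H)
print(policy.shape)  # (H, S)
```

The bonus applied to the value estimate as a whole, rather than a per-transition confidence set, is what the abstract credits with improving the dependence on $S$, while the variance term in the bonus is what improves the dependence on $H$.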
