Minimax Optimal Reinforcement Learning with Quasi-Optimism

2 March 2025
Harin Lee
Min-hwan Oh
OffRL
Abstract

In our quest for a reinforcement learning (RL) algorithm that is both practical and provably optimal, we introduce EQO (Exploration via Quasi-Optimism). Unlike existing minimax optimal approaches, EQO avoids reliance on empirical variances and employs a simple bonus term proportional to the inverse of the state-action visit count. Central to EQO is the concept of quasi-optimism, where estimated values need not be fully optimistic, allowing for a simpler yet effective exploration strategy. The algorithm achieves the sharpest known regret bound for tabular RL under the mildest assumptions, proving that fast convergence can be attained with a practical and computationally efficient approach. Empirical evaluations demonstrate that EQO consistently outperforms existing algorithms in both regret performance and computational efficiency, offering the best of both worlds: theoretical soundness and practical effectiveness.
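
To make the bonus structure concrete, here is a minimal tabular sketch, not the paper's implementation: one optimistic backup in which the exploration bonus scales as c/N(s,a), in contrast to the sqrt-style terms used by variance-based optimistic methods. The constant c, the discounted backup, and all names (eqo_style_backup, counts, trans_counts) are illustrative assumptions.

import numpy as np

# Sketch of a 1/N(s,a) exploration bonus in tabular value estimation.
# Assumptions for illustration only: a discounted backup, a fixed bonus
# constant c, and empirical model statistics gathered elsewhere. This is
# not the paper's exact EQO specification.

def eqo_style_backup(counts, rewards_sum, trans_counts, V, gamma=0.99, c=1.0):
    """One optimistic Q backup from empirical model plus a 1/N bonus.

    counts:       (S, A) visit counts N(s, a)
    rewards_sum:  (S, A) summed observed rewards per (s, a)
    trans_counts: (S, A, S) observed next-state counts per (s, a)
    V:            (S,) current value estimates
    """
    S, A = counts.shape
    Q = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            n = max(counts[s, a], 1)       # avoid division by zero
            r_hat = rewards_sum[s, a] / n  # empirical mean reward
            p_hat = trans_counts[s, a] / n # empirical transition probabilities
            bonus = c / n                  # quasi-optimistic bonus: 1/n,
                                           # not a sqrt(1/n) UCB-style term
            Q[s, a] = r_hat + bonus + gamma * (p_hat @ V)
    return Q

The design point the sketch highlights is that a bonus proportional to 1/N(s,a) shrinks faster than the sqrt(1/N) bonuses of variance-based optimistic methods, so the estimated values need not dominate the true values everywhere; this relaxed requirement is what the paper calls quasi-optimism.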

@article{lee2025_2503.00810,
  title={Minimax Optimal Reinforcement Learning with Quasi-Optimism},
  author={Harin Lee and Min-hwan Oh},
  journal={arXiv preprint arXiv:2503.00810},
  year={2025}
}