ResearchTrend.AI
arXiv:2501.00913
β-DQN: Improving Deep Q-Learning By Evolving the Behavior

3 January 2025
Hongming Zhang
Fengshuo Bai
Chenjun Xiao
Chao Gao
Bo Xu
Martin Müller
    OffRL
Abstract

While many sophisticated exploration methods have been proposed, their lack of generality and high computational cost often lead researchers to favor simpler methods like ε-greedy. Motivated by this, we introduce β-DQN, a simple and efficient exploration method that augments the standard DQN with a behavior function β. This function estimates the probability that each action has been taken at each state. By leveraging β, we generate a population of diverse policies that balance exploration between state-action coverage and overestimation bias correction. An adaptive meta-controller is designed to select an effective policy for each episode, enabling flexible and explainable exploration. β-DQN is straightforward to implement and adds minimal computational overhead to the standard DQN. Experiments on both simple and challenging exploration domains show that β-DQN outperforms existing baseline methods across a wide range of tasks, providing an effective solution for improving exploration in deep reinforcement learning.
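The mechanism the abstract describes — a behavior function β(s, a) that tracks how often each action has been taken in each state, with derived policies trading off state-action coverage against greedy exploitation — can be illustrated with a tabular toy sketch. This is not the paper's implementation: in β-DQN the behavior function is a learned network and the policy population is richer; here β is approximated with smoothed visit counts, and all class and method names below are assumptions made for illustration.

```python
import numpy as np

class BetaDQNSketch:
    """Illustrative tabular sketch of the β-DQN idea (not the authors' code):
    β(a|s) is estimated from Laplace-smoothed visit counts, and two derived
    policies either cover rarely taken actions or exploit the Q estimates."""

    def __init__(self, n_states, n_actions, seed=0):
        self.rng = np.random.default_rng(seed)
        self.q = np.zeros((n_states, n_actions))      # action-value estimates
        self.counts = np.ones((n_states, n_actions))  # visit counts, smoothed

    def beta(self, s):
        # β(a|s): empirical probability that action a was taken in state s
        return self.counts[s] / self.counts[s].sum()

    def act(self, s, mode):
        if mode == "explore":
            # coverage policy: prefer the least-taken action under β
            return int(np.argmin(self.beta(s)))
        # greedy policy: exploit the current Q estimates
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # record the taken action for β and do a standard Q-learning update
        self.counts[s, a] += 1
        target = r + gamma * self.q[s_next].max()
        self.q[s, a] += alpha * (target - self.q[s, a])
```

In the paper's scheme, an adaptive meta-controller chooses which member of the policy population to run for each episode; in this sketch that would amount to a simple bandit over the `"explore"` and `"greedy"` modes, scored by episodic return.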

View on arXiv