
On the Importance of Reward Design in Reinforcement Learning-based Dynamic Algorithm Configuration: A Case Study on OneMax with (1+(λ,λ))-GA

Abstract

Dynamic Algorithm Configuration (DAC) has garnered significant attention in recent years, particularly with the increasing prevalence of machine learning and deep learning algorithms. Numerous studies have leveraged the robustness of decision-making in Reinforcement Learning (RL) to address the optimization challenges associated with algorithm configuration. However, making an RL agent work properly is a non-trivial task, especially with respect to reward design, which requires a substantial amount of handcrafted knowledge based on domain expertise. In this work, we study the importance of reward design in the context of DAC via a case study on controlling the population size of the (1+(λ,λ))-GA optimizing OneMax. We observe that a poorly designed reward can hinder the RL agent's ability to learn an optimal policy due to a lack of exploration, leading to both scalability and learning-divergence issues. To address these challenges, we propose applying a reward shaping mechanism that facilitates better exploration of the environment by the RL agent. Our work not only demonstrates the ability of RL to dynamically configure the (1+(λ,λ))-GA, but also confirms the benefits of reward shaping for the scalability of RL agents across various OneMax problem sizes.
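
As a rough illustration of the setting described in the abstract, the sketch below shows one generation of the (1+(λ,λ))-GA on OneMax together with a shaped reward. It is a minimal sketch under our own assumptions: the potential-based shaping term (using normalized fitness as the potential) and the fitness-improvement base reward are illustrative stand-ins, not necessarily the exact reward-shaping mechanism studied in the paper, and the population size lam would in practice be chosen by the RL policy at every generation rather than fixed.

import numpy as np

def onemax(x):
    # OneMax fitness: number of one-bits.
    return int(x.sum())

def one_plus_lambda_lambda_step(x, lam, rng):
    # One generation of the (1+(lambda,lambda))-GA with the standard parameter
    # coupling p = lam/n (mutation rate) and c = 1/lam (crossover bias).
    n = len(x)
    lam = max(1, int(round(lam)))
    ell = rng.binomial(n, lam / n)          # number of bits flipped in every mutant

    # Mutation phase: lam offspring, each flipping ell uniformly chosen bits.
    mutants = []
    for _ in range(lam):
        y = x.copy()
        idx = rng.choice(n, size=ell, replace=False)
        y[idx] ^= 1
        mutants.append(y)
    x_prime = max(mutants, key=onemax)

    # Crossover phase: biased uniform crossover between parent and best mutant.
    offspring = []
    for _ in range(lam):
        mask = rng.random(n) < 1.0 / lam
        offspring.append(np.where(mask, x_prime, x))
    y_best = max(offspring, key=onemax)

    # Elitist selection: keep the parent unless the offspring is at least as good.
    return y_best if onemax(y_best) >= onemax(x) else x

def shaped_reward(f_old, f_new, n, gamma=0.99):
    # Base reward: raw fitness improvement (sparse close to the optimum).
    base = f_new - f_old
    # Potential-based shaping with phi(s) = f(s)/n; an illustrative choice only.
    return base + gamma * (f_new / n) - (f_old / n)

# Minimal usage: a fixed lam stands in for the action an RL policy would pick.
rng = np.random.default_rng(0)
n = 100
x = rng.integers(0, 2, size=n)
while onemax(x) < n:
    f_old = onemax(x)
    x = one_plus_lambda_lambda_step(x, lam=4, rng=rng)
    r = shaped_reward(f_old, onemax(x), n)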

@article{nguyen2025_2502.20265,
  title={On the Importance of Reward Design in Reinforcement Learning-based Dynamic Algorithm Configuration: A Case Study on OneMax with $(1+(\lambda,\lambda))$-GA},
  author={Tai Nguyen and Phong Le and André Biedenkapp and Carola Doerr and Nguyen Dang},
  journal={arXiv preprint arXiv:2502.20265},
  year={2025}
}