Reinforcement Learning-based Self-adaptive Differential Evolution through Automated Landscape Feature Learning

Abstract

Recently, Meta-Black-Box-Optimization (MetaBBO) methods have significantly enhanced the performance of traditional black-box optimizers by meta-learning flexible and generalizable meta-level policies that excel in dynamic algorithm configuration (DAC) tasks within low-level optimization, reducing the expertise required to adapt optimizers to novel optimization tasks. Though promising, existing MetaBBO methods rely heavily on human-crafted feature extraction to secure learning effectiveness. To address this issue, this paper introduces a novel MetaBBO method that supports automated feature learning during the meta-learning process, termed RLDE-AFL, which integrates a learnable feature extraction module into a reinforcement learning-based DE method to learn both the feature encoding and the meta-level policy. Specifically, we design an attention-based neural network with a mantissa-exponent based embedding that transforms the solution populations and corresponding objective values during low-level optimization into expressive landscape features. We further incorporate a comprehensive algorithm configuration space, including diverse DE operators, into a reinforcement learning-aided DAC paradigm to unleash the behavioral diversity and performance of the proposed RLDE-AFL. Extensive benchmark results show that co-training the proposed feature learning module and the DAC policy gives RLDE-AFL superior optimization performance over several advanced DE methods and recent MetaBBO baselines on both synthetic and realistic BBO scenarios. The source code of RLDE-AFL is available at this https URL.
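To make the mantissa-exponent embedding idea concrete, the following minimal sketch (an illustrative assumption, not the authors' released implementation; the function name mantissa_exponent_embed and the exponent scaling constant are hypothetical) decomposes raw objective values with numpy.frexp so that fitness values spanning many orders of magnitude become bounded, well-conditioned inputs for a downstream attention encoder.

```python
import numpy as np

def mantissa_exponent_embed(fitness: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: encode each objective value as a
    (sign, mantissa, scaled exponent) triple so that values spanning
    many orders of magnitude stay in a bounded numeric range."""
    sign = np.sign(fitness)
    # frexp returns (m, e) with |m| in [0.5, 1) and |fitness| = m * 2**e
    mantissa, exponent = np.frexp(np.abs(fitness))
    # squash the integer exponent into a bounded range (assumed normalization)
    exponent = exponent / 64.0
    return np.stack([sign, mantissa, exponent], axis=-1)

# Example: one population's objective values across very different scales
fitness = np.array([1.5e-8, 3.2e0, 7.4e6])
features = mantissa_exponent_embed(fitness)
print(features.shape)  # (3, 3) -- one embedding triple per individual
```

Under this kind of encoding, the attention-based feature extractor sees inputs of comparable magnitude regardless of the raw objective scale, which is the scale-invariance property the abstract attributes to the learned landscape features.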

@article{guo2025_2503.18061,
  title={Reinforcement Learning-based Self-adaptive Differential Evolution through Automated Landscape Feature Learning},
  author={Hongshu Guo and Sijie Ma and Zechuan Huang and Yuzhi Hu and Zeyuan Ma and Xinglin Zhang and Yue-Jiao Gong},
  journal={arXiv preprint arXiv:2503.18061},
  year={2025}
}