Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation

Foundations of Genetic Algorithms (FOGA), 2020
Abstract

In stochastic optimization, particularly in evolutionary computation and reinforcement learning, the optimization of a function $f: \Omega \to \mathbb{R}$ is often addressed through optimizing a so-called relaxation $\theta \in \Theta \mapsto \mathbb{E}_\theta(f)$ of $f$, where $\Theta$ represents the parameters of a family of probability measures on $\Omega$. We investigate the structure of such relaxations by means of measure theory and Fourier analysis, enabling us to shed light on the success of many stochastic optimization methods. The main structural traits we derive, which allow fast and reliable optimization of relaxations, are the consistency of optimal values of $f$, Lipschitz continuity of gradients, and convexity.
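The relaxation described above can be illustrated with a minimal sketch (not from the paper): choosing $\Theta$ as the means of an isotropic Gaussian family $N(\mu, \sigma^2 I)$ on $\Omega = \mathbb{R}^d$, the relaxation $\mathbb{E}_\theta(f)$ and its gradient in $\mu$ can be estimated by Monte Carlo sampling, the gradient via the standard score-function (log-likelihood) trick. The function names and the choice of the sphere function as test objective are illustrative assumptions.

```python
import numpy as np

def relaxation_value(f, mu, sigma, n_samples=10000, seed=None):
    """Monte Carlo estimate of the relaxation E_theta(f) under a
    Gaussian family N(mu, sigma^2 I) on Omega = R^d."""
    rng = np.random.default_rng(seed)
    x = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    return float(np.mean([f(xi) for xi in x]))

def relaxation_grad_mu(f, mu, sigma, n_samples=10000, seed=None):
    """Score-function estimator of grad_mu E_theta(f), i.e.
    E[f(x) * (x - mu) / sigma^2]; simple but high-variance."""
    rng = np.random.default_rng(seed)
    x = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    fx = np.array([f(xi) for xi in x])
    return (fx[:, None] * (x - mu)).mean(axis=0) / sigma**2

# Illustrative objective: the sphere function f(x) = ||x||^2.
# Its Gaussian relaxation is mu -> ||mu||^2 + d * sigma^2, which is
# smooth and convex even when f itself is non-smooth or noisy.
f = lambda x: float(np.dot(x, x))
mu = np.array([1.0, -2.0])
sigma = 0.5

val = relaxation_value(f, mu, sigma, seed=0)   # approx 5.5 = 5 + 2 * 0.25
grad = relaxation_grad_mu(f, mu, sigma, seed=0)  # approx 2 * mu = [2, -4]
```

On this example the estimates recover the closed-form relaxation value and gradient, illustrating the structural traits named in the abstract (smooth, convex relaxation with Lipschitz gradients) in the simplest possible setting.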
