
Deep Reinforcement Learning: A Convex Optimization Approach

Abstract

In this paper, we consider reinforcement learning of nonlinear systems with continuous state and action spaces. We present an episodic learning algorithm, where in each episode we use convex optimization to find a two-layer neural network approximation of the optimal $Q$-function. The convex optimization approach guarantees that the weights computed in each episode are optimal with respect to the sampled states and actions of that episode. For stable nonlinear systems, we show that the algorithm converges and that the parameters of the trained neural network can be made arbitrarily close to the optimal neural network parameters. In particular, if the regularization parameter is $\rho$ and the time horizon is $T$, then the parameters of the trained neural network converge to $w$, where the distance between $w$ and the optimal parameters $w^\star$ is bounded by $\mathcal{O}(\rho T^{-1})$. That is, as the number of episodes goes to infinity, there exists a constant $C$ such that \[\|w-w^\star\| \le C\cdot\frac{\rho}{T}.\] Hence, our algorithm converges arbitrarily close to the optimal neural network parameters as the time horizon increases or as the regularization parameter decreases.
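The episodic structure described above can be illustrated with a minimal sketch. This is not the paper's method: the toy dynamics ($s' = 0.8\,s + a$), the quadratic cost, and the fixed feature map `phi` (standing in for the hidden layer of the two-layer network) are all assumptions for illustration. With the features fixed, fitting the output weights each episode reduces to a ridge-regularized least-squares problem, which is convex and has a closed-form solution, mirroring the role the regularization parameter $\rho$ and horizon $T$ play in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(s, a):
    """Hypothetical fixed feature map; stands in for the hidden-layer
    activations of the two-layer Q-network. With these fixed, the
    per-episode weight fit below is a convex problem."""
    return np.array([1.0, s, a, s * a, s * s, a * a])

def episode_fit(w, T, rho, gamma=0.9):
    """One episode of length T on a toy stable linear system
    s' = 0.8 s + a; collect samples and solve the convex
    ridge regression for new weights (illustrative only)."""
    feats, targets = [], []
    s = rng.normal()
    for _ in range(T):
        a = rng.normal()                  # exploratory action
        r = -(s * s + a * a)              # quadratic cost as reward
        s_next = 0.8 * s + a              # stable dynamics
        # one-step Bellman target under the current weights
        q_next = max(phi(s_next, b) @ w for b in np.linspace(-2, 2, 9))
        feats.append(phi(s, a))
        targets.append(r + gamma * q_next)
        s = s_next
    X, y = np.array(feats), np.array(targets)
    # convex per-episode problem: ridge-regularized least squares
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + rho * np.eye(d), X.T @ y)

w = np.zeros(6)
for _ in range(50):                       # episodes
    w = episode_fit(w, T=200, rho=1e-2)
print(w.shape)
```

Each episode discards the previous regression and refits the weights on freshly sampled data, which is the sense in which the per-episode weights are optimal for that episode's samples.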
