
Minimal Expected Regret in Linear Quadratic Control

Abstract

We consider the problem of online learning in Linear Quadratic Control systems whose state transition and state-action transition matrices $A$ and $B$ may be initially unknown. We devise an online learning algorithm and provide guarantees on its expected regret. This regret at time $T$ is upper bounded (i) by $\widetilde{O}((d_u+d_x)\sqrt{d_x T})$ when $A$ and $B$ are unknown, (ii) by $\widetilde{O}(d_x^2 \log(T))$ if only $A$ is unknown, and (iii) by $\widetilde{O}(d_x(d_u+d_x)\log(T))$ if only $B$ is unknown and under some mild non-degeneracy condition ($d_x$ and $d_u$ denote the dimensions of the state and of the control input, respectively). These regret scalings are minimal in $T$, $d_x$ and $d_u$, as they match existing lower bounds in scenario (i) when $d_x \le d_u$ [SF20], and in scenario (ii) [Lai86]. We conjecture that our upper bounds are also optimal in scenario (iii) (there is no known lower bound in this setting). Existing online algorithms proceed in epochs of (typically exponentially) growing durations. The control policy is fixed within each epoch, which considerably simplifies the analysis of the estimation error on $A$ and $B$, and hence of the regret. Our algorithm departs from this design choice: it is a simple variant of certainty-equivalence regulators, where the estimates of $A$ and $B$ and the resulting control policy can be updated as frequently as we wish, possibly at every step. Quantifying the impact of such a constantly-varying control policy on the performance of these estimates and on the regret constitutes one of the technical challenges tackled in this paper.
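To make the certainty-equivalence scheme described above concrete, here is a minimal Python sketch of a regulator that re-estimates $A$ and $B$ by regularized least squares and recomputes its control gain at every step. The ground-truth system, the regularization weight, the decaying exploration noise, and the zero-gain fallback are illustrative assumptions of ours, not the paper's exact algorithm or parameter choices; only the overall structure (per-step estimation plus a certainty-equivalence LQR gain) follows the abstract.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical ground-truth system, unknown to the learner.
d_x, d_u = 3, 2
A_true = 0.9 * np.eye(d_x) + 0.05 * rng.standard_normal((d_x, d_x))
B_true = rng.standard_normal((d_x, d_u))
Q, R = np.eye(d_x), np.eye(d_u)  # quadratic cost x'Qx + u'Ru

def ce_gain(A_hat, B_hat):
    """Certainty-equivalence LQR gain for the current estimates."""
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

# Regularized least squares on covariates z_t = (x_t, u_t),
# targets x_{t+1}: the estimate is theta = [A_hat  B_hat].
lam = 1.0
V = lam * np.eye(d_x + d_u)      # regularized Gram matrix of covariates
S = np.zeros((d_x + d_u, d_x))   # running sum of z_t x_{t+1}'

x = np.zeros(d_x)
T = 500
for t in range(T):
    # Update the estimates and the policy at every step.
    theta = np.linalg.solve(V, S).T
    A_hat, B_hat = theta[:, :d_x], theta[:, d_x:]
    try:
        K = ce_gain(A_hat, B_hat)
    except (ValueError, np.linalg.LinAlgError):
        K = np.zeros((d_u, d_x))  # fallback while estimates are too poor
    # Decaying exploration noise: one common choice, assumed here.
    u = -K @ x + (t + 1) ** (-1 / 4) * rng.standard_normal(d_u)
    x_next = A_true @ x + B_true @ u + 0.1 * rng.standard_normal(d_x)
    z = np.concatenate([x, u])
    V += np.outer(z, z)
    S += np.outer(z, x_next)
    x = x_next
```

In contrast, the epoch-based algorithms mentioned in the abstract would hold `K` fixed over exponentially growing windows and refit `A_hat`, `B_hat` only at epoch boundaries; the per-step update above is precisely what complicates the estimation-error analysis the paper addresses.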
