
Control-Oriented Learning on the Fly

Abstract

This paper focuses on developing a strategy for satisfying basic control objectives for systems whose dynamics are almost entirely unknown. This situation arises naturally when a system undergoes a critical failure that significantly changes its dynamics. In that case, it is imperative to retain the ability to satisfy basic control objectives in order to avert an imminent catastrophe. A prime example of such an objective is the reach-avoid problem, where a system needs to reach a certain state within a constrained state space. To deal with significant restrictions on our knowledge of the system dynamics, we develop a theory of myopic control. The primary goal of myopic control is, at any given time, to optimize the current direction of the system trajectory using solely the limited information obtained about the system up to that time. Building on this notion, we propose a control algorithm that uses small perturbations in the control effort to learn the local system dynamics while simultaneously moving in the direction that appears optimal based on the knowledge obtained so far. We prove that the algorithm produces a trajectory that is nearly optimal in the myopic sense, i.e., one moving in a direction that appears to be nearly the best at the given time. We provide hard bounds on the suboptimality of the proposed algorithm, and show that it yields a control law arbitrarily close to a myopically optimal control law. We verify the usefulness of the proposed algorithm in a number of simulations based on the running example of a damaged aircraft seeking to land, as well as on the classical example of a Van der Pol oscillator.
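The probe-then-steer idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm or its guarantees; it assumes control-affine dynamics ẋ = f(x) + G(x)u with G(x) unknown, uses finite-difference probing of each control channel to estimate the local effect of the input, and then picks the bounded control that best aligns the instantaneous velocity with the direction toward the target. The Van der Pol-style plant inside `simulate_step` is a hypothetical stand-in, not the model from the paper.

```python
import numpy as np

def simulate_step(x, u, dt=0.01):
    # Hypothetical "unknown" plant (Van der Pol-like drift with additive
    # control), used only as a stand-in; the controller never reads f or G.
    mu = 1.0
    f = np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])
    G = np.eye(2)
    return x + dt * (f + G @ u)

def myopic_control(x, target, eps=1e-3, dt=0.01, u_max=1.0):
    """Probe each control channel, then steer toward the target."""
    m = 2
    # Velocity under zero control approximates the drift f(x).
    v0 = (simulate_step(x, np.zeros(m), dt) - x) / dt
    # Small perturbations in each channel estimate the columns of G(x).
    B = np.zeros((2, m))
    for i in range(m):
        e = np.zeros(m)
        e[i] = eps
        v = (simulate_step(x, e, dt) - x) / dt
        B[:, i] = (v - v0) / eps
    # Steer: maximize alignment of the velocity with the target direction,
    # subject to a bound on the control effort.
    d = target - x
    d = d / (np.linalg.norm(d) + 1e-12)
    u = B.T @ d
    n = np.linalg.norm(u)
    if n > u_max:
        u = u_max * u / n
    return u
```

Here the probing and the steering happen in separate steps for clarity; the algorithm in the paper interleaves them along a single trajectory, together with suboptimality bounds that this sketch does not reproduce.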
