Learning to Control DC Motor for Micromobility in Real Time with
Reinforcement Learning
Autonomous micromobility has been attracting the attention of both researchers and practitioners in recent years. A key component of many micro-transport vehicles is the DC motor, which represents a complex dynamical system (continuous and non-linear). Learning to quickly control such a system in the presence of disturbances and uncertainties is desirable not only for micromobility but also for other industrial and robotic applications. Techniques to accomplish this task usually rely on a mathematical model of the system, which is often insufficient to anticipate the effects of time-varying and interrelated sources of non-linearity. While some model-free approaches have succeeded at this task, they rely on massive numbers of interactions with the system and are trained on specialized hardware to fit a highly parameterized controller. In this work, we learn to control one such dynamical system, the steering position of a DC motor, via sample-efficient reinforcement learning. Using data collected from hardware interactions in the real world, we additionally build a simulator to experiment with a wide range of parameters and learning strategies. Using the parameters found in simulation, we successfully learn an effective control policy in one minute and 53 seconds in simulation and in 10 minutes and 35 seconds on the physical system.
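The setup described above can be illustrated with a minimal sketch: a DC motor simulator built from nominal dynamics, plus a small tabular Q-learning loop that learns to drive the shaft to a target steering angle. All motor parameters, discretization choices, and hyperparameters here are illustrative assumptions, not values from the paper (which learns from real hardware data and uses a sample-efficient method).

```python
import numpy as np

# Assumed nominal DC motor parameters (illustrative only, not from the paper).
R, L = 1.0, 0.5        # armature resistance (ohm) and inductance (H)
Kt, Ke = 0.05, 0.05    # torque constant and back-EMF constant
J, b = 0.01, 0.1       # rotor inertia and viscous friction
DT = 0.01              # Euler integration step (s)

def motor_step(theta, omega, i, v):
    """One forward-Euler step of the standard DC motor equations."""
    di = (v - R * i - Ke * omega) / L          # armature circuit
    domega = (Kt * i - b * omega) / J          # rotor dynamics
    theta += omega * DT
    omega += domega * DT
    i += di * DT
    return theta, omega, i

ACTIONS = np.array([-12.0, 0.0, 12.0])         # applied voltages (V)

def discretize(err, omega):
    """Map continuous (position error, velocity) to a coarse grid state."""
    e_bin = int(np.clip((err + np.pi) / (2 * np.pi) * 10, 0, 9))
    w_bin = int(np.clip((omega + 20.0) / 40.0 * 10, 0, 9))
    return e_bin, w_bin

def train(episodes=200, target=1.0, alpha=0.2, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning toward a fixed steering angle `target` (rad)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((10, 10, len(ACTIONS)))
    for _ in range(episodes):
        theta, omega, i = 0.0, 0.0, 0.0
        for _ in range(200):
            s = discretize(target - theta, omega)
            if rng.random() < eps:               # epsilon-greedy exploration
                a = int(rng.integers(len(ACTIONS)))
            else:
                a = int(np.argmax(Q[s]))
            theta, omega, i = motor_step(theta, omega, i, ACTIONS[a])
            s2 = discretize(target - theta, omega)
            r = -abs(target - theta)             # penalize position error
            Q[s][a] += alpha * (r + gamma * Q[s2].max() - Q[s][a])
    return Q
```

In this sketch the simulator plays the role the paper assigns to the data-driven simulator: hyperparameters such as `alpha`, `gamma`, and `eps` can be searched cheaply in simulation before any policy is fit on the physical motor.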