Model Selection for Average Reward RL with Application to Utility Maximization in Repeated Games

In standard RL, a learner attempts to learn an optimal policy for a Markov Decision Process whose structure (e.g., its state space) is known. In online model selection, the learner attempts to learn an optimal policy for an MDP knowing only that it belongs to one of several model classes of varying complexity. Recent results have shown that this can be accomplished efficiently in episodic online RL. In this work, we propose an online model selection algorithm for the average reward RL setting. The regret of the algorithm scales with the complexity of the simplest well-specified model class and with that class's corresponding regret bound. This result shows that in average reward RL, as in episodic online RL, the additional cost of model selection scales only linearly in the number of model classes. We apply the algorithm to the interaction between a learner and an opponent in a two-player simultaneous general-sum repeated game, where the opponent follows a fixed unknown limited-memory strategy. The learner's goal is to maximize its utility without knowing the opponent's utility function. The interaction runs over a long horizon with no episodes or discounting, which leads us to measure the learner's performance by average reward regret. In this application, our algorithm enjoys an opponent-complexity-dependent regret bound whose parameters are the opponent's unknown memory limit, the unknown span of the optimal bias induced by the opponent, and the numbers of actions available to the learner and the opponent. We also show that an exponential dependence on the opponent's memory limit is unavoidable by proving a lower bound on the learner's regret.
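To make the repeated-game setting concrete, the following is a minimal sketch (not from the paper) of the key reduction the abstract relies on: when the opponent's fixed strategy depends only on a bounded window of recent joint actions, the interaction becomes an average-reward MDP whose state is that window. All names here (`opponent_policy`, `learner_utility`, the specific payoff rule, and the memory size) are illustrative assumptions.

```python
import random

LEARNER_ACTIONS = range(2)   # actions available to the learner
OPPONENT_ACTIONS = range(2)  # actions available to the opponent
MEMORY = 2                   # assumed memory limit of the opponent (unknown in the paper)

def opponent_policy(history):
    """A fixed limited-memory opponent: acts based only on recent joint actions."""
    # Illustrative rule: copy the learner's most recent action (tit-for-tat-like).
    return history[-1][0] if history else 0

def learner_utility(a_learner, a_opponent):
    """The learner's payoff for a joint action (hypothetical general-sum payoff)."""
    return 1.0 if a_learner == a_opponent else 0.0

def average_reward(learner_policy, rounds=10_000):
    """Estimate the average reward of a learner policy that maps the
    MEMORY-length window of joint actions (the induced MDP state) to an action."""
    history, total = [], 0.0
    for _ in range(rounds):
        state = tuple(history[-MEMORY:])   # state of the induced average-reward MDP
        a1 = learner_policy(state)
        a2 = opponent_policy(history)
        total += learner_utility(a1, a2)
        history.append((a1, a2))
    return total / rounds

# Usage: evaluate a naive learner that plays action 0 in every state.
print(average_reward(lambda state: 0))
```

Because the learner does not know the opponent's memory limit, each candidate window length defines a model class of different complexity, which is where online model selection enters.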