Reinforcement Learning with Delayed, Composite, and Partially Anonymous Reward

We investigate an infinite-horizon average-reward Markov Decision Process (MDP) with delayed, composite, and partially anonymous reward feedback. Delay and compositeness mean that the reward generated by taking an action at a given state is fragmented into components that are realized sequentially at later time instants. Partial anonymity means that, for each state, the learner only observes the aggregate of past reward components that were generated by different actions taken at that state but are realized at the current observation instant. We propose an algorithm, DUCRL2, to obtain a near-optimal policy for this setting and show that it achieves a regret bound of $\tilde{\mathcal{O}}\left(DS\sqrt{AT} + d(SA)^3\right)$, where $S$ and $A$ are the sizes of the state and action spaces, respectively, $D$ is the diameter of the MDP, $d$ is a parameter upper bounded by the maximum reward delay, and $T$ denotes the time horizon. This demonstrates the optimality of the bound in the order of $T$ and an additive impact of the delay.
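
To make the feedback model concrete, the following is a minimal sketch (not the paper's algorithm) of how delayed, composite, partially anonymous rewards could be simulated: each action's reward is split into components that arrive at later times, and the learner only sees, per state, the sum of whatever components arrive at the current step. The state/action sizes, delay distribution, and reward values are toy assumptions for illustration.

```python
import random
from collections import defaultdict

N_STATES, N_ACTIONS = 3, 2
MAX_DELAY = 5          # upper bound on reward delay (analogue of the paper's delay bound)
HORIZON = 20

random.seed(0)

# pending[t][s] accumulates reward components that will be observed for state s at time t
pending = defaultdict(lambda: defaultdict(float))

def composite_delayed_reward(s, a, t):
    """Split the reward of taking action a in state s at time t into delayed components."""
    total = random.random()                     # toy reward in [0, 1]
    n_parts = random.randint(1, 3)              # compositeness: fragment into components
    for part in [total / n_parts] * n_parts:
        delay = random.randint(0, MAX_DELAY)    # each component realizes at a later time
        pending[t + delay][s] += part           # anonymity: only the state is tagged, not the action

state = 0
for t in range(HORIZON):
    action = random.randrange(N_ACTIONS)        # placeholder for a learner's action choice
    composite_delayed_reward(state, action, t)

    # Partially anonymous observation: per-state aggregate of components arriving now;
    # the learner cannot tell which past actions produced them.
    observation = {s: pending[t][s] for s in range(N_STATES) if pending[t][s] > 0}
    print(f"t={t}, observed per-state aggregates: {observation}")

    state = random.randrange(N_STATES)          # toy transition dynamics
```

The key point the sketch illustrates is that the observation at time $t$ mixes components from several earlier actions taken at the same state, which is precisely what distinguishes this setting from standard delayed-reward MDPs.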