Learning Adversarial MDPs with Bandit Feedback and Unknown Transition

Abstract
We consider the problem of learning in episodic finite-horizon Markov decision processes with an unknown transition function, bandit feedback, and adversarial losses. We propose an efficient algorithm that achieves $\widetilde{\mathcal{O}}(L|S|\sqrt{|A|T})$ regret with high probability, where $L$ is the horizon, $|S|$ is the number of states, $|A|$ is the number of actions, and $T$ is the number of episodes. To the best of our knowledge, our algorithm is the first to ensure $\widetilde{\mathcal{O}}(\sqrt{T})$ regret in this challenging setting. Our key technical contribution is to introduce an optimistic loss estimator that is inversely weighted by an upper occupancy bound.
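For intuition, a minimal sketch of such an estimator's form follows, under assumed notation (the precise construction, including the confidence set used to compute the upper occupancy bound, is in the full paper): let $\mathbb{1}_t\{s,a\}$ indicate whether the state-action pair $(s,a)$ was visited in episode $t$, and let $u_t(s,a)$ upper-bound the probability of visiting $(s,a)$ under the current policy across all plausible transition functions. The estimated loss is then

$$\hat{\ell}_t(s,a) \;=\; \frac{\ell_t(s,a)}{u_t(s,a)}\,\mathbb{1}_t\{s,a\}.$$

Because $u_t(s,a)$ is at least the true visitation probability, the estimator underestimates losses in expectation, i.e., it is optimistic, which encourages exploration of under-visited state-action pairs.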