
Learning Adversarial MDPs with Bandit Feedback and Unknown Transition

Abstract

We consider the problem of learning in episodic finite-horizon Markov decision processes with unknown transition function, bandit feedback, and adversarial losses. We propose an efficient algorithm that achieves $\mathcal{\tilde{O}}(L|X|^2\sqrt{|A|T})$ regret with high probability, where $L$ is the horizon, $|X|$ is the number of states, $|A|$ is the number of actions, and $T$ is the number of episodes. To the best of our knowledge, our algorithm is the first one to ensure $\mathcal{\tilde{O}}(\sqrt{T})$ regret in this challenging setting. Our key technical contribution is to introduce an optimistic loss estimator that is inversely weighted by an \textit{upper occupancy bound}.
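
To make the last point concrete, below is a minimal LaTeX sketch of what such an estimator can look like; the notation ($u_t$, $\gamma$, $\mathcal{P}_t$, $q^{P,\pi}$) is illustrative rather than taken from the paper. The idea: with bandit feedback the learner observes losses only along its own trajectory, so each observed loss is divided by an optimistic (upper) estimate of the probability of visiting that state-action pair, computed over a confidence set of plausible transition functions.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of an inverse-upper-occupancy-weighted loss estimator.
% \mathbb{1}_t\{x,a\} indicates that (x,a) was visited in episode t;
% u_t(x,a) is the largest visitation probability of (x,a) consistent
% with a confidence set \mathcal{P}_t of transition functions; and
% \gamma \ge 0 is an assumed small smoothing term (illustrative).
\[
  \hat{\ell}_t(x,a)
    = \frac{\ell_t(x,a)\,\mathbb{1}_t\{x,a\}}{u_t(x,a) + \gamma},
  \qquad
  u_t(x,a)
    = \max_{\widehat{P} \in \mathcal{P}_t} q^{\widehat{P},\,\pi_t}(x,a),
\]
% where q^{P,\pi}(x,a) denotes the occupancy measure of policy \pi
% under transition function P. Because u_t(x,a) upper-bounds the true
% visitation probability, the estimator is biased downward
% (optimistic); a standard importance-weighted estimator would instead
% require the true visitation probability, which is unavailable when
% the transition function is unknown.
\end{document}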
