Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning
International Conference on Agents and Artificial Intelligence (ICAART), 2009
Abstract
Recent work on Bayesian methods for reinforcement learning has demonstrated near-optimal online performance. The main obstacle facing such methods is that, in most problems of interest, computing the optimal solution requires planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node, which enables stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such algorithms and examines their complexity in this setting.
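The core idea in the abstract can be illustrated with a minimal sketch of stochastic branch and bound on a toy tree. This is not the paper's algorithm: the tree, the noisy rollout evaluator, and the fixed confidence margin are all stand-in assumptions. The sketch keeps a frontier of nodes, prunes any node whose stochastic upper bound falls below the best stochastic lower bound, and repeatedly expands the node with the highest upper bound.

```python
import random

random.seed(0)

def make_node(value, children=()):
    # toy node: a scalar payoff plus a list of child nodes
    return {"value": value, "children": list(children)}

# Toy tree: the best reachable leaf has value 3.0.
tree = make_node(0.0, [
    make_node(1.0, [make_node(0.5), make_node(3.0)]),
    make_node(2.0, [make_node(1.5), make_node(2.5)]),
])

def subtree_max(node):
    # true best value in the subtree (unknown to the search,
    # used only inside the noisy evaluator below)
    if not node["children"]:
        return node["value"]
    return max(subtree_max(c) for c in node["children"])

def bounds(node, n_samples=100):
    # Stochastic bounds: average noisy evaluations of the subtree's best
    # value (a stand-in for rollouts) and pad by a fixed margin. A real
    # algorithm would use a concentration bound shrinking with n_samples.
    target = subtree_max(node)
    mean = sum(target + random.gauss(0, 1) for _ in range(n_samples)) / n_samples
    margin = 0.4
    return mean - margin, mean + margin

def stochastic_branch_and_bound(root, budget=10):
    frontier = [root]
    root["lb"], root["ub"] = bounds(root)
    for _ in range(budget):
        best_lb = max(n["lb"] for n in frontier)
        # prune nodes that cannot beat the best lower bound
        frontier = [n for n in frontier if n["ub"] >= best_lb]
        # expand the node with the highest stochastic upper bound
        node = max(frontier, key=lambda n: n["ub"])
        if not node["children"]:
            break  # most promising node is a leaf: stop refining
        frontier.remove(node)
        for child in node["children"]:
            child["lb"], child["ub"] = bounds(child)
            frontier.append(child)
    return max(frontier, key=lambda n: n["lb"])

best = stochastic_branch_and_bound(tree)
print(best["value"])
```

With the fixed seed, the search expands toward the subtree containing the high-value leaf and returns it; the key invariant is that an internal node's upper bound must bound the best value in its subtree, otherwise pruning can discard the optimal branch.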
