The Survival Bandit Problem

We study the survival bandit problem, a variant of the multi-armed bandit problem with a constraint on the cumulative reward; at each time step, the agent receives a reward in [-1, 1], and if the cumulative reward drops below a preset threshold, the procedure stops. This phenomenon is called ruin. To our knowledge, this is the first paper to study a framework in which ruin is possible but not certain. We first show that no policy can achieve a sublinear regret as defined in the standard multi-armed bandit problem, because a single pull of an arm may significantly increase the risk of ruin. Instead, we establish the framework of Pareto-optimal policies: the class of policies whose cumulative reward on one instance cannot be improved without sacrificing the cumulative reward on another. To this end, we provide tight lower bounds on the probability of ruin, as well as matching policies called EXPLOIT. Finally, using a doubling trick over an EXPLOIT policy, we exhibit a Pareto-optimal policy in the case of {-1, 0, 1} rewards, answering an open problem posed by Perotto et al. (2019).
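Since the abstract defines ruin only informally, the following minimal simulation sketch illustrates the setup: an episode that stops early as soon as the cumulative reward falls below the threshold. All names here (run_survival_bandit, budget, the {-1, +1} reward distribution) are illustrative assumptions for this sketch, not the paper's notation or algorithm.

```python
import random

def run_survival_bandit(arm_means, budget, horizon, policy, seed=0):
    """Simulate one survival-bandit episode with {-1, +1} rewards
    (a special case of rewards in [-1, 1]).

    The episode stops early ("ruin") as soon as the cumulative
    reward drops below -budget, i.e. the initial budget is spent.
    """
    rng = random.Random(seed)
    cumulative = 0
    for t in range(horizon):
        arm = policy(t)
        # Reward is +1 with probability p, -1 otherwise, so its
        # mean is 2p - 1; any distribution on [-1, 1] also fits.
        p = arm_means[arm]
        reward = 1 if rng.random() < p else -1
        cumulative += reward
        if cumulative < -budget:  # ruin: threshold crossed
            return cumulative, t + 1, True
    return cumulative, horizon, False

# A naive policy that always pulls arm 0. It illustrates why regret
# cannot be made sublinear here: early pulls of a below-average arm
# already create a non-negligible probability of ruin.
total, steps, ruined = run_survival_bandit(
    arm_means=[0.45, 0.6], budget=5, horizon=1000, policy=lambda t: 0
)
print(f"cumulative={total}, steps={steps}, ruined={ruined}")
```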