Staged Multi-armed Bandits

Abstract

In this paper we introduce a new class of online learning problems called staged multi-armed bandit (S-MAB) problems. Unlike conventional multi-armed bandit (MAB) problems, in which the reward is observed immediately after each action is taken, the S-MAB problem proceeds in rounds composed of stages. In each round, the learner proceeds in stages by sequentially selecting from a set of available actions. Upon each action selection a feedback signal is observed, while the reward of the selected sequence of actions is only revealed after a stop action that ends the current round. The reward of a round depends both on the sequence of selected actions and on the sequence of observed feedbacks. The goal of the learner is to maximize its total expected reward over all rounds by learning to choose the best sequence of actions based on the feedback it receives about these actions. First, we define an oracle benchmark, which sequentially selects the actions that maximize the expected immediate reward. This benchmark is known to be approximately optimal when the reward of the selected sequence of actions is adaptive submodular. Then, we propose an online learning algorithm whose regret with respect to the oracle benchmark is logarithmic in the number of rounds and linear in the number of stages. The proposed framework can be applied to many problems, including patient treatment, web-based education, and Big Data streaming application scheduling.
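To make the round/stage structure concrete, below is a minimal Python sketch of the S-MAB interaction protocol together with a greedy, UCB-style learner that mimics the oracle benchmark by picking, at each stage, the action with the highest index on its estimated immediate reward. The names (GreedyStagedLearner, run_round), the specific UCB index, and the additive round reward are illustrative assumptions for exposition only; they are not the paper's actual algorithm or its regret-optimal index.

```python
import math
import random

class GreedyStagedLearner:
    """Greedy baseline in the spirit of the oracle benchmark: at each stage,
    select the action with the highest UCB on its estimated expected
    immediate reward. Illustrative sketch, not the paper's algorithm."""

    def __init__(self, n_actions, max_stages):
        self.n_actions = n_actions
        self.max_stages = max_stages
        self.counts = [0] * n_actions   # times each action was selected
        self.means = [0.0] * n_actions  # running mean of observed feedback
        self.t = 0                      # total number of selections so far

    def select(self):
        self.t += 1
        # Play each action once before trusting the UCB index.
        for a in range(self.n_actions):
            if self.counts[a] == 0:
                return a
        ucb = [self.means[a] + math.sqrt(2 * math.log(self.t) / self.counts[a])
               for a in range(self.n_actions)]
        return max(range(self.n_actions), key=lambda a: ucb[a])

    def update(self, action, feedback):
        # Incremental update of the running mean for the selected action.
        self.counts[action] += 1
        self.means[action] += (feedback - self.means[action]) / self.counts[action]

def run_round(learner, feedback_fn):
    """One S-MAB round: select up to max_stages actions, observing a
    feedback signal after each selection; the round reward (here assumed
    to be the sum of feedbacks) is revealed only at the stop action."""
    history = []
    for _ in range(learner.max_stages):
        a = learner.select()
        f = feedback_fn(a)       # per-stage feedback signal
        learner.update(a, f)
        history.append((a, f))
    reward = sum(f for _, f in history)  # revealed only after the stop
    return reward, history

# Example: Bernoulli feedback with hypothetical per-action success rates.
random.seed(0)
true_means = [0.2, 0.5, 0.8]
learner = GreedyStagedLearner(n_actions=3, max_stages=4)
for _ in range(1000):
    reward, _ = run_round(learner, lambda a: float(random.random() < true_means[a]))
```

Note that this sketch deliberately simplifies the setting: the feedback here is the reward itself and rounds always run for max_stages stages, whereas in the S-MAB formulation the feedback signal, the stopping decision, and the round reward are distinct objects.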
