Staged Multi-armed Bandits

In conventional multi-armed bandits (MAB) and other reinforcement learning methods, the learner sequentially chooses actions and obtains a reward (which may be missing, delayed, or erroneous) after each action it takes. This reward is then used to improve future decisions. However, in numerous applications, ranging from personalized patient treatment to personalized web-based education, the learner does not obtain a reward after each action; instead, the reward arrives only after a sequence of actions is taken, intermediate feedback is observed, and a final decision is made. In this paper, we introduce a new class of reinforcement learning methods that can operate in such settings, which we refer to as staged multi-armed bandits (S-MAB). S-MAB proceeds in rounds, each composed of several stages. In each stage, the learner chooses an action and observes a feedback signal; the reward of the selected sequence of actions is revealed only after the learner selects a stop action that ends the current round. The reward of a round depends both on the sequence of actions and on the sequence of observed feedback signals. The goal of the learner is to maximize its total expected reward over all rounds by learning to choose the best sequence of actions based on the feedback it receives about these actions. First, we define an oracle benchmark, which sequentially selects the actions that maximize the expected immediate reward. This benchmark is known to be approximately optimal when the reward sequence associated with the selected actions is adaptive submodular. Then, we propose an online learning algorithm for which we prove that the regret with respect to the oracle benchmark is logarithmic in the number of rounds and linear in the number of stages.
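
The staged interaction protocol described in the abstract can be summarized with a minimal sketch. The environment, feedback model, and policy below are hypothetical placeholders (not the paper's construction or algorithm); they only illustrate that a feedback signal is observed at every stage while the reward of the round is revealed only once the stop action is selected.

```python
import random

# Illustrative sketch of a staged bandit round (hypothetical environment,
# not the paper's construction).

ACTIONS = ["a1", "a2", "a3"]   # regular actions available at each stage
STOP = "stop"                  # ends the current round and reveals the reward

def observe_feedback(action, history):
    """Hypothetical per-stage feedback signal for the chosen action."""
    return random.random()

def round_reward(actions, feedbacks):
    """Hypothetical reward of the round; depends on both the action
    sequence and the observed feedback sequence."""
    return sum(feedbacks) / (1 + len(actions))

def play_round(policy, max_stages=5):
    actions, feedbacks = [], []
    for _ in range(max_stages):
        action = policy(actions, feedbacks)      # decide based on history so far
        if action == STOP:
            break
        feedback = observe_feedback(action, (actions, feedbacks))
        actions.append(action)
        feedbacks.append(feedback)               # feedback observed every stage...
    return round_reward(actions, feedbacks)      # ...reward revealed only at the end

def random_policy(actions, feedbacks):
    # Placeholder policy; the learner in the paper would instead choose
    # actions to maximize expected reward given past feedback, improving
    # its choices over rounds.
    return random.choice(ACTIONS + [STOP])

total = sum(play_round(random_policy) for _ in range(10))
print("total reward over 10 rounds:", total)
```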