Modeling Human Decision-making in Generalized Gaussian Multi-armed
Bandits
We develop plausible human decision-making models in three multi-armed bandit problems, namely, the standard multi-armed bandit problem, the multi-armed bandit problem with transition costs, and the multi-armed bandit problem on graphs. We focus on the case of Gaussian rewards and study these problems in a Bayesian setting. We develop the upper credible limit (UCL) algorithm for the standard multi-armed bandit problem, show that it achieves a logarithmic cumulative expected regret, and draw several connections between the proposed algorithm and human decision-making behavior. We model the prior knowledge of the human through the prior reward distribution in Bayesian inference, and elucidate the role of priors and the correlation structure among arms in decision-making performance. We present empirical data from human experiments and show that human performance is captured well by the proposed UCL algorithm with appropriate parameters. In the context of the multi-armed bandit problem with transition costs and the multi-armed bandit problem on graphs, we extend the UCL algorithm to the block UCL algorithm and the graphical block UCL algorithm, respectively. We show that these algorithms also achieve a logarithmic cumulative expected regret and require a sub-logarithmic expected number of transitions among arms. We further illustrate the performance of these algorithms with numerical examples. Finally, we propose a formal framework for incorporating the proposed human decision-making models in the design of mixed human-automata teams.
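To make the flavor of the UCL heuristic concrete, here is a minimal sketch of a Bayesian upper-credible-limit rule for Gaussian bandits: each arm keeps a conjugate Gaussian posterior, and at step t the decision-maker pulls the arm with the largest (1 - 1/t)-upper credible limit of its posterior reward. This is an illustrative simplification, not the paper's exact algorithm; the function name `ucl_bandit`, the independent (uncorrelated) priors, the known reward variance `sigma`, and the specific credibility schedule are assumptions for the sketch.

```python
import random
from statistics import NormalDist


def ucl_bandit(true_means, horizon, sigma=1.0, prior_mean=0.0,
               prior_var=100.0, seed=0):
    """Sketch of a Bayesian upper-credible-limit rule for Gaussian bandits.

    Each arm has an independent conjugate Gaussian prior; at step t the arm
    with the largest (1 - 1/t)-upper credible limit is pulled. The reward
    standard deviation `sigma` is assumed known.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    # Posterior parameters per arm, in mean/precision form.
    post_mean = [prior_mean] * n_arms
    post_prec = [1.0 / prior_var] * n_arms
    pulls = [0] * n_arms
    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Quantile of the standard normal at credibility level 1 - 1/t
        # (zero at t = 1, where inv_cdf(0) is undefined).
        z = NormalDist().inv_cdf(1.0 - 1.0 / t) if t > 1 else 0.0
        # Upper credible limit: posterior mean + z * posterior std.
        ucl = [post_mean[i] + z / post_prec[i] ** 0.5 for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: ucl[i])
        reward = rng.gauss(true_means[arm], sigma)
        # Conjugate Gaussian update with known reward variance sigma^2.
        new_prec = post_prec[arm] + 1.0 / sigma ** 2
        post_mean[arm] = (post_prec[arm] * post_mean[arm]
                          + reward / sigma ** 2) / new_prec
        post_prec[arm] = new_prec
        pulls[arm] += 1
        total_reward += reward
    return pulls, total_reward
```

On a two-armed instance with mean rewards 0 and 1, the rule quickly concentrates its pulls on the better arm while the shrinking 1/t tail probability keeps a vanishing amount of exploration, which is the mechanism behind the logarithmic-regret behavior discussed in the abstract. Informative priors enter through `prior_mean` and `prior_var`: a confident, accurate prior reduces early exploration.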