Bandits with Dynamic Arm-acquisition Costs

We consider a bandit problem where, at any time, the decision maker can add new arms to her consideration set. A new arm is queried at a cost from an "arm-reservoir" containing finitely many "arm-types," each characterized by a distinct mean reward. The query cost is reflected in a diminishing probability of the returned arm being of the optimal type, unbeknownst to the decision maker; this feature encapsulates defining characteristics of a broad class of operations-inspired online learning problems, e.g., those arising in markets with churn, or those involving allocations subject to costly resource acquisition. The decision maker's goal is to maximize her cumulative expected payoff over a sequence of pulls, oblivious to the statistical properties as well as the types of the queried arms. We study two natural modes of endogeneity in the reservoir distribution and characterize (tight) necessary conditions for the achievability of sub-linear regret. We also provide a granular analysis of the effects of endogeneity on the performance of algorithms tailored to the static version (sans endogeneity) of the problem. In doing so, we propose a new algorithm and provide refined analyses leading to tighter bounds for one from the extant literature. We believe our findings may be of broader interest and can guide future work in the area.
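To make the model concrete, below is a minimal simulation sketch in Python. The arm-type means, the query cost, the specific decay of the optimal-type probability, and the periodic-query UCB1 baseline are all illustrative assumptions of ours, not the paper's parameters or proposed algorithm; the sketch only mirrors the setting described above (costly reservoir queries, hidden types, diminishing chance of drawing the optimal type).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instantiation of the model (illustrative assumptions):
# a reservoir with finitely many arm-types, each a distinct Bernoulli mean.
ARM_TYPE_MEANS = [0.9, 0.6, 0.3]   # type 0 is the optimal type
QUERY_COST = 0.05                  # cost paid per reservoir query

def query_reservoir(num_queries_so_far):
    """Draw a new arm's type from the reservoir.

    The probability that the returned arm is of the optimal type
    diminishes with the number of costly queries made so far -- one
    simple stand-in for an endogenous reservoir distribution; the
    exact decay rate here is an assumption, not the paper's.
    """
    p_opt = 0.5 / (1 + num_queries_so_far)            # shrinking optimal-type mass
    p_rest = (1 - p_opt) / (len(ARM_TYPE_MEANS) - 1)  # remainder split evenly
    probs = [p_opt] + [p_rest] * (len(ARM_TYPE_MEANS) - 1)
    return rng.choice(len(ARM_TYPE_MEANS), p=probs)

def run(horizon=10_000, query_every=500):
    """Naive baseline: query a fresh arm periodically, otherwise play a
    UCB1 index over the arms acquired so far. Types and means stay
    hidden from the learner, as in the problem statement."""
    arm_types, pulls, rewards = [], [], []
    total_payoff, queries = 0.0, 0
    for t in range(horizon):
        if t % query_every == 0:                      # acquire a new arm at a cost
            arm_types.append(query_reservoir(queries))
            pulls.append(0)
            rewards.append(0.0)
            queries += 1
            total_payoff -= QUERY_COST
        # UCB1 index over the current consideration set
        ucb = [
            rewards[i] / pulls[i] + np.sqrt(2 * np.log(t + 1) / pulls[i])
            if pulls[i] > 0 else float("inf")
            for i in range(len(arm_types))
        ]
        i = int(np.argmax(ucb))
        r = float(rng.random() < ARM_TYPE_MEANS[arm_types[i]])  # Bernoulli reward
        pulls[i] += 1
        rewards[i] += r
        total_payoff += r
    return total_payoff

print(f"cumulative payoff: {run():.1f}")
```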