Asymptotically Optimal Pure Exploration for Infinite-Armed Bandits

We study pure exploration with infinitely many bandit arms generated i.i.d. from an unknown distribution. Our goal is to efficiently select a single high-quality arm whose average reward is, with probability $1-\delta$, within $\varepsilon$ of being among the top $\eta$-fraction of arms; this is a natural adaptation of the classical PAC guarantee for infinite action sets. We consider both the fixed-confidence and fixed-budget settings, aiming respectively for minimal expected and fixed sample complexity.

For fixed confidence, we give an algorithm with expected sample complexity $O\bigl(\frac{\log(1/\eta)\log(1/\delta)}{\eta\varepsilon^2}\bigr)$. This is optimal except for the $\log(1/\eta)$ factor, and the $\delta$-dependence closes a quadratic gap in the literature. For fixed budget, we show that the asymptotically optimal sample complexity as $\delta \to 0$ is $c^{-1}\log(1/\delta)\cdot(\log\log(1/\delta))^2$ to leading order. Equivalently, the optimal failure probability given exactly $N$ samples decays as $\exp(-cN/\log^2 N)$, up to a $\log^{o(1)} N$ factor inside the exponent. The constant $c$ depends explicitly on the problem parameters (including the unknown arm distribution) through a certain Fisher information distance. Even the strictly super-linear dependence on $\log(1/\delta)$ was not known and resolves a question of Grossman and Moshkovitz (FOCS 2016; SIAM Journal on Computing 2020).
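To make the PAC objective concrete, here is a minimal sketch of the naive "subsample, then uniformly explore" baseline for the fixed-confidence setting. This is not the paper's algorithm: its sample complexity is roughly quadratic in $\log(1/\delta)$, which illustrates the gap the fixed-confidence result closes. The function names and the `draw_arm`/`pull` reward interface are illustrative assumptions.

```python
import math
import random

def naive_pac_baseline(draw_arm, pull, eta, eps, delta):
    """Naive baseline (NOT the paper's algorithm): with probability >= 1 - delta,
    return an arm whose mean reward is within eps of the top eta-fraction of arms.

    draw_arm() samples a fresh arm from the unknown arm distribution;
    pull(arm) returns a stochastic reward in [0, 1].
    """
    # Step 1: draw m arms so that, with probability >= 1 - delta/2, at least
    # one lies in the top eta-fraction (each draw succeeds w.p. eta).
    m = math.ceil(math.log(2 / delta) / eta)
    arms = [draw_arm() for _ in range(m)]

    # Step 2: pull each arm enough times that, by Hoeffding plus a union
    # bound, every empirical mean is within eps/2 of its true mean with
    # probability >= 1 - delta/2.
    n = math.ceil(2 * math.log(4 * m / delta) / eps ** 2)
    means = {arm: sum(pull(arm) for _ in range(n)) / n for arm in arms}

    # On the good event, the empirically best arm is within eps of a
    # top-eta arm.
    return max(arms, key=means.get)
```

The total number of samples is about $m \cdot n \approx \frac{1}{\eta\varepsilon^2}\log(1/\delta)\,\log\bigl(\log(1/\delta)/(\eta\delta)\bigr)$, i.e. nearly quadratic in $\log(1/\delta)$, versus the linear $\log(1/\delta)$-dependence achieved by the paper's fixed-confidence algorithm.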