Note on Follow-the-Perturbed-Leader in Combinatorial Semi-Bandit Problems

This paper studies the optimality and computational complexity of the Follow-the-Perturbed-Leader (FTPL) policy in size-invariant combinatorial semi-bandit problems. Recently, Honda et al. (2023) and Lee et al. (2024) showed that FTPL achieves Best-of-Both-Worlds (BOBW) optimality in standard multi-armed bandit problems with Fréchet-type perturbation distributions. However, the optimality of FTPL in combinatorial semi-bandit problems has remained unclear. In this paper, we analyze the regret of FTPL with geometric resampling (GR) in the size-invariant semi-bandit setting, showing that FTPL achieves a regret bound with Fréchet distributions and the best possible regret bound with Pareto distributions in the adversarial setting. Furthermore, we extend conditional geometric resampling (CGR) to the size-invariant semi-bandit setting, which reduces the computational complexity of the original GR without sacrificing the regret performance of FTPL.
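As background, geometric resampling (introduced by Neu and Bartók) lets FTPL build an unbiased loss estimate without knowing the probability that an arm is played: one repeatedly redraws the perturbed leader until the observed arm reappears, and the number of redraws is an unbiased estimate of the inverse of that probability. The sketch below is a minimal illustration of this idea; the sampler, function names, and truncation parameter are assumptions for the example, not the paper's algorithm.

```python
import random

def geometric_resampling(sample_action, arm, max_trials=1000):
    """Estimate 1/p, where p is the probability that the set returned by
    sample_action() contains `arm`. The count of i.i.d. resamples until the
    arm reappears is geometric with mean 1/p; `max_trials` truncates it to
    bound the per-round computational cost (a standard GR device)."""
    for k in range(1, max_trials + 1):
        if arm in sample_action():
            return k
    return max_trials

# Toy sampler (an assumption for illustration): includes arm 0 w.p. 0.25.
random.seed(0)
def sampler():
    return {0} if random.random() < 0.25 else {1}

# Averaging many GR estimates should approach 1/0.25 = 4.
estimates = [geometric_resampling(sampler, arm=0) for _ in range(10000)]
print(sum(estimates) / len(estimates))
```

The truncation at `max_trials` trades a small bias for bounded running time; the conditional variant (CGR) studied in the paper reduces this resampling cost further while preserving FTPL's regret guarantees.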
@article{chen2025_2506.12490,
  title   = {Note on Follow-the-Perturbed-Leader in Combinatorial Semi-Bandit Problems},
  author  = {Botao Chen and Junya Honda},
  journal = {arXiv preprint arXiv:2506.12490},
  year    = {2025}
}