Maillard Sampling: Boltzmann Exploration Done Optimally

5 November 2021
Jieming Bian
Kwang-Sung Jun
Abstract

The PhD thesis of Maillard (2013) presents a rather obscure algorithm for the $K$-armed bandit problem. This less-known algorithm, which we call Maillard sampling (MS), computes the probability of choosing each arm in a *closed form*, which is not true for Thompson sampling, a widely-adopted bandit algorithm in the industry. This means that the bandit-logged data from running MS can be readily used for counterfactual evaluation, unlike Thompson sampling. Motivated by such merit, we revisit MS and perform an improved analysis to show that it achieves both asymptotic optimality and a $\sqrt{KT\log{T}}$ minimax regret bound, where $T$ is the time horizon, which matches the known bounds for asymptotically optimal UCB. We then propose a variant of MS called MS$^+$ that improves its minimax bound to $\sqrt{KT\log{K}}$. MS$^+$ can also be tuned to be aggressive (i.e., less exploration) without losing asymptotic optimality, a unique feature unavailable from existing bandit algorithms. Our numerical evaluation shows the effectiveness of MS$^+$.
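The abstract's key point is that MS assigns each arm a sampling probability in closed form. The following is a minimal sketch of such a closed-form rule, assuming 1-subgaussian rewards, where each arm's weight decays with its pull count and the square of its empirical gap to the best arm; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def maillard_sampling_probs(counts, means):
    """Closed-form arm-selection probabilities in the style of
    Maillard sampling: weight exp(-N_a * gap_a^2 / 2) per arm,
    assuming 1-subgaussian rewards (a sketch, not the paper's code)."""
    gaps = np.max(means) - means           # empirical gaps to the best arm
    logits = -counts * gaps**2 / 2.0       # log-weights; best arm gets 0
    logits -= np.max(logits)               # stabilize before exponentiating
    weights = np.exp(logits)
    return weights / weights.sum()         # normalize to a distribution

# Example: the empirically best arm receives the largest probability,
# and well-sampled suboptimal arms are exponentially downweighted.
probs = maillard_sampling_probs(np.array([10.0, 10.0, 10.0]),
                                np.array([0.5, 0.3, 0.1]))
```

Because the probabilities are explicit (unlike Thompson sampling, where they are defined only implicitly through posterior sampling), logged data from such a policy can be reweighted directly for counterfactual evaluation, e.g. via inverse-propensity scoring.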
