
Analysis of Thompson Sampling for Gaussian Process Optimization in the Bandit Setting

18 May 2017
Kinjal Basu
Souvik Ghosh
arXiv:1705.06808
Abstract

We consider the global optimization of a function over a continuous domain. At every evaluation attempt, we can observe the function at a chosen point in the domain and we reap the reward of the value observed. We assume that these observations are expensive to obtain and noisy. We frame this as a continuum-armed bandit problem with a Gaussian Process prior on the function. In this regime, most algorithms have been developed to minimize some form of regret. Contrary to this popular norm, in this paper we study the convergence of the sequence of query points x^t to the global optimizer x^* under the Thompson Sampling approach. Under some very mild assumptions, we show that the point sequence converges to the true optimum.
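
To make the setting concrete, below is a minimal sketch (not the authors' code) of Thompson Sampling with a Gaussian Process prior over a discretized one-dimensional domain: at each round, one sample path is drawn from the current posterior and the next query point x^t is its argmax, after which a noisy reward is observed and the posterior is updated. The kernel, noise level, and test objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel k(a, b) for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def f(x):
    """Unknown objective (illustrative); only noisy evaluations are available."""
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

grid = np.linspace(0.0, 1.0, 200)   # discretization of the continuous domain
noise_var = 0.01
X_obs, y_obs = [], []

for t in range(30):
    if X_obs:
        Xo = np.array(X_obs)
        K = rbf_kernel(Xo, Xo) + noise_var * np.eye(len(Xo))
        Ks = rbf_kernel(grid, Xo)
        Kss = rbf_kernel(grid, grid)
        mu = Ks @ np.linalg.solve(K, np.array(y_obs))   # posterior mean on the grid
        cov = Kss - Ks @ np.linalg.solve(K, Ks.T)       # posterior covariance
    else:
        mu = np.zeros_like(grid)          # prior mean
        cov = rbf_kernel(grid, grid)      # prior covariance

    # Thompson step: sample one function from the posterior and query its argmax.
    sample = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)))
    x_t = grid[np.argmax(sample)]
    y_t = f(x_t) + np.sqrt(noise_var) * rng.standard_normal()
    X_obs.append(x_t)
    y_obs.append(y_t)

print("last queried point x^t:", X_obs[-1])
```

Under the paper's result, the sequence of queried points x^t produced by this kind of procedure converges to the global optimizer x^* under mild assumptions, rather than merely achieving low regret.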
