Open Problem: Tight Bounds for Kernelized Multi-Armed Bandits with Bernoulli Rewards
8 July 2024
Marco Mussi
Simone Drago
Alberto Maria Metelli

Papers citing "Open Problem: Tight Bounds for Kernelized Multi-Armed Bandits with Bernoulli Rewards"

A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits
Junghyun Lee
Se-Young Yun
Kwang-Sung Jun
19 Jul 2024