ResearchTrend.AI

Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners
arXiv:2006.07562

13 June 2020
Mohammadi Zaki, Avinash Mohan, Aditya Gopalan

Papers citing "Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners" (10 papers)
Towards Optimal and Efficient Best Arm Identification in Linear Bandits
Mohammadi Zaki, Avinash Mohan, Aditya Gopalan
05 Nov 2019
Non-Asymptotic Pure Exploration by Solving Games
Rémy Degenne, Wouter M. Koolen, Pierre Ménard
25 Jun 2019
Sequential Experimental Design for Transductive Linear Bandits
Tanner Fiez, Lalit P. Jain, Kevin Jamieson, Lillian J. Ratliff
20 Jun 2019
Polynomial-time Algorithms for Multiple-arm Identification with Full-bandit Feedback
Yuko Kuroki, Liyuan Xu, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama
27 Feb 2019
Fully Adaptive Algorithm for Pure Exploration in Linear Bandits
Liyuan Xu, Junya Honda, Masashi Sugiyama
16 Oct 2017
Optimal Best Arm Identification with Fixed Confidence
Aurélien Garivier, E. Kaufmann
15 Feb 2016
Best-Arm Identification in Linear Bandits
Marta Soare, A. Lazaric, Rémi Munos
22 Sep 2014
On the Complexity of Best Arm Identification in Multi-Armed Bandit Models
E. Kaufmann, Olivier Cappé, Aurélien Garivier
16 Jul 2014
Follow the Leader If You Can, Hedge If You Must
S. D. Rooij, T. Erven, Peter Grünwald, Wouter M. Koolen
03 Jan 2013
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger
21 Dec 2009