arXiv: 2103.12452
Bandits with many optimal arms
23 March 2021
R. D. Heide
J. Cheshire
Pierre Ménard
Alexandra Carpentier
Papers citing "Bandits with many optimal arms" (12 papers)
Non-Stationary Lipschitz Bandits
Nicolas Nguyen, Solenne Gaucher, Claire Vernade
24 May 2025

Clustering Items through Bandit Feedback: Finding the Right Feature out of Many
Maximilian Graf, Victor Thuot, Nicolas Verzélen
14 Mar 2025

Finite-Sample Analysis of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning
Suei-Wen Chen, Keith Ross, Pierre Youssef
03 Oct 2024

Exploration Unbound
Dilip Arumugam, Wanqiao Xu, Benjamin Van Roy
16 Jul 2024

Active Ranking of Experts Based on their Performances in Many Tasks
E. Saad, Nicolas Verzélen, Alexandra Carpentier
05 Jun 2023

Asymptotically Optimal Pure Exploration for Infinite-Armed Bandits
Evelyn Xiao-Yue Gong, Mark Sellke
03 Jun 2023

Best Arm Identification for Stochastic Rising Bandits
Marco Mussi, Alessandro Montenegro, Francesco Trovò, Marcello Restelli, Alberto Maria Metelli
15 Feb 2023

Complexity Analysis of a Countable-armed Bandit Problem
Anand Kalvit, A. Zeevi
18 Jan 2023

AC-Band: A Combinatorial Bandit-Based Approach to Algorithm Configuration
Jasmin Brandt, Elias Schede, Viktor Bengs, Björn Haddenhorst, Eyke Hüllermeier, Kevin Tierney
01 Dec 2022

Revisiting Simple Regret: Fast Rates for Returning a Good Arm
Yao Zhao, Connor James Stephens, Csaba Szepesvári, Kwang-Sung Jun
30 Oct 2022

LEGS: Learning Efficient Grasp Sets for Exploratory Grasping
Letian Fu, Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Jeffrey Ichnowski, Eugen Solowjow, Ken Goldberg
29 Nov 2021

Bandits with Dynamic Arm-acquisition Costs
Anand Kalvit, A. Zeevi
23 Oct 2021