The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms

arXiv:2002.10121
24 February 2020
Mohsen Bayati, N. Hamidi, Ramesh Johari, Khashayar Khosravi

Papers citing "The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms"

7 papers shown:

Discounted Thompson Sampling for Non-Stationary Bandit Problems
Han Qi, Yue Wang, Li Zhu
18 May 2023

Bandit Social Learning: Exploration under Myopic Behavior
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
15 Feb 2023

A survey on multi-player bandits
Etienne Boursier, Vianney Perchet
29 Nov 2022

Rotting Infinitely Many-armed Bandits
Jung-hun Kim, Milan Vojnović, Se-Young Yun
31 Jan 2022

Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations
S. P. Parambath, Christos Anagnostopoulos, R. Murray-Smith, Sean MacAvaney, E. Zervas
31 Aug 2021

Be Greedy in Multi-Armed Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet
04 Jan 2021

On Bayesian index policies for sequential resource allocation
E. Kaufmann
06 Jan 2016