ResearchTrend.AI

The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
Mohsen Bayati, N. Hamidi, Ramesh Johari, Khashayar Khosravi
arXiv:2002.10121, 24 February 2020
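The paper's subject, greedy (exploration-free) policies in bandits with many arms, can be illustrated with a minimal sketch. This is an illustrative toy, not the authors' exact procedure: the Bernoulli reward model, the single initialization pull per arm, and all parameter values below are assumptions made for the example.

```python
import random

def greedy_bandit(means, horizon, seed=0):
    """Fully greedy play on a Bernoulli bandit: pull each arm once,
    then always pull the arm with the highest empirical mean reward.
    (Illustrative sketch only; not the paper's exact algorithm.)"""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    total_reward = 0.0
    for t in range(horizon):
        if t < k:
            arm = t       # initial pass: one pull per arm
        else:
            # pure exploitation: argmax of empirical means
            arm = max(range(k), key=lambda a: sums[a] / counts[a])
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward

# With many arms, the initial pass alone is likely to surface a
# near-optimal arm, which is the regime the paper analyzes.
many_means = [random.Random(1).uniform(0, 1) for _ in range(200)]
reward = greedy_bandit(many_means, horizon=1000)
```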

Papers citing "The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms"

18 papers shown
Preference-based learning for news headline recommendation
Alexandre Bouras, A. Durand, Richard Khoury
31 May 2025

An Exploration-free Method for a Linear Stochastic Bandit Driven by a Linear Gaussian Dynamical System
J. Gornet, Yilin Mo, Bruno Sinopoli
04 Apr 2025

Greedy Algorithm for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure
Aleksandrs Slivkins, Yunzong Xu, Shiliang Zuo
06 Mar 2025

Tracking Most Significant Shifts in Infinite-Armed Bandits
Joe Suk, Jung-hun Kim
31 Jan 2025

Little Exploration is All You Need
Henry H.H. Chen, Jiaming Lu
26 Oct 2023

Approximate information maximization for bandit games
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
A. Barbier–Chebbah, Christian L. Vestergaard, Jean-Baptiste Masson, Etienne Boursier
19 Oct 2023

Byzantine-Resilient Decentralized Multi-Armed Bandits
Jingxuan Zhu, Alec Koppel, Alvaro Velasquez, Ji Liu
11 Oct 2023

Discounted Thompson Sampling for Non-Stationary Bandit Problems
Han Qi, Yue Wang, Li Zhu
18 May 2023

Bandit Social Learning: Exploration under Myopic Behavior
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
15 Feb 2023

A survey on multi-player bandits
Journal of Machine Learning Research (JMLR), 2022
Etienne Boursier, Vianney Perchet
29 Nov 2022

Discover Life Skills for Planning with Bandits via Observing and Learning How the World Works
Tin Lai
17 Jul 2022

Improving Sequential Query Recommendation with Immediate User Feedback
Shameem Puthiya Parambath, Christos Anagnostopoulos, Roderick Murray-Smith
12 May 2022

Auto-Transfer: Learning to Route Transferrable Representations
International Conference on Learning Representations (ICLR), 2022
K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar
02 Feb 2022

Rotting Infinitely Many-armed Bandits
International Conference on Machine Learning (ICML), 2022
Jung-hun Kim, Milan Vojnović, Se-Young Yun
31 Jan 2022

Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations
Asian Conference on Machine Learning (ACML), 2021
Shameem Puthiya Parambath, Christos Anagnostopoulos, R. Murray-Smith, Sean MacAvaney, E. Zervas
31 Aug 2021

Parallelizing Contextual Bandits
Jeffrey Chan, Aldo Pacchiano, Nilesh Tripuraneni, Yun S. Song, Peter L. Bartlett, Michael I. Jordan
21 May 2021

Be Greedy in Multi-Armed Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet
04 Jan 2021

A General Theory of the Stochastic Linear Bandit and Its Applications
N. Hamidi, Mohsen Bayati
12 Feb 2020