ResearchTrend.AI
Bandit Social Learning: Exploration under Myopic Behavior

15 February 2023
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins

Papers citing "Bandit Social Learning: Exploration under Myopic Behavior"

6 papers shown

Exploration and Persuasion
Aleksandrs Slivkins
22 Oct 2024

Can large language models explore in-context?
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins
Topics: LM&Ro, LLMAG, LRM
22 Mar 2024

Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, D. Tiapkin, Etienne Boursier, Aymeric Capitaine, El-Mahdi El-Mhamdi, Eric Moulines, Michael I. Jordan, Alain Durmus
06 Mar 2024

Replication-proof Bandit Mechanism Design with Bayesian Agents
Seyed A. Esmaeili, Mohammadtaghi Hajiaghayi, Suho Shin
28 Dec 2023

Incentivized Collaboration in Active Learning
Lee Cohen, Han Shao
Topics: FedML
01 Nov 2023

Be Greedy in Multi-Armed Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet
04 Jan 2021