ResearchTrend.AI

Minimal Exploration in Structured Stochastic Bandits

1 November 2017
Richard Combes
Stefan Magureanu
Alexandre Proutière

Papers citing "Minimal Exploration in Structured Stochastic Bandits"

A Complete Characterization of Learnability for Stochastic Noisy Bandits
Steve Hanneke
Kun Wang
20 Jan 2025
Causally Abstracted Multi-armed Bandits
Fabio Massimo Zennaro
Nicholas Bishop
Joel Dyer
Yorgos Felekis
Anisoara Calinescu
Michael Wooldridge
Theodoros Damoulas
26 Apr 2024
Quantum contextual bandits and recommender systems for quantum data
Shrigyan Brahmachari
Josep Lumbreras
Marco Tomamichel
31 Jan 2023
SPEED: Experimental Design for Policy Evaluation in Linear Heteroscedastic Bandits
Subhojyoti Mukherjee
Qiaomin Xie
Josiah P. Hanna
R. Nowak
29 Jan 2023
Interactive Recommendations for Optimal Allocations in Markets with Constraints
Y. E. Erginbas
Soham R. Phade
K. Ramchandran
08 Jul 2022
Truncated LinUCB for Stochastic Linear Bandits
Yanglei Song
Meng Zhou
23 Feb 2022
Fast online inference for nonlinear contextual bandit based on Generative Adversarial Network
Yun-Da Tsai
Shou-De Lin
17 Feb 2022
Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs
Han Zhong
Jiayi Huang
Lin F. Yang
Liwei Wang
26 Oct 2021
Multi-armed Bandit Algorithm against Strategic Replication
Suho Shin
Seungjoon Lee
Jungseul Ok
23 Oct 2021
Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals
E. Kaufmann
Wouter M. Koolen
28 Nov 2018