Approximate information maximization for bandit games. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
A survey on multi-player bandits. Journal of Machine Learning Research (JMLR), 2022.
Auto-Transfer: Learning to Route Transferrable Representations. International Conference on Learning Representations (ICLR), 2022.
Rotting Infinitely Many-armed Bandits. International Conference on Machine Learning (ICML), 2022.
Max-Utility Based Arm Selection Strategy For Sequential Query Recommendations. Asian Conference on Machine Learning (ACML), 2021.