Near-Optimal Pure Exploration in Matrix Games: A Generalization of Stochastic Bandits & Dueling Bandits. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
Identifying Copeland Winners in Dueling Bandits with Indifferences. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
Finding Optimal Arms in Non-stochastic Combinatorial Bandits with Semi-bandit Feedback and Finite Budget. Neural Information Processing Systems (NeurIPS), 2022.
Semi-verified PAC Learning from the Crowd. International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.