Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits. Neural Information Processing Systems (NeurIPS), 2023.

Adversarial Sleeping Bandit Problems with Multiple Plays: Algorithm and Ranking Application. ACM Conference on Recommender Systems (RecSys), 2023.

One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

Walk for Learning: A Random Walk Approach for Federated Learning from Heterogeneous Data. IEEE Journal on Selected Areas in Communications (JSAC), 2022.

Non-Stationary Bandits under Recharging Payoffs: Improved Planning with Sublinear Regret. Neural Information Processing Systems (NeurIPS), 2022.

Adversarial Dueling Bandits. International Conference on Machine Learning (ICML), 2020.