
| Title | Venue, Year |
|---|---|
| The Nah Bandit: Modeling User Non-compliance in Recommendation Systems | IEEE Transactions on Control of Network Systems (TCNS), 2024 |
| Thompson sampling for zero-inflated count outcomes with an application to the Drink Less mobile health study | Annals of Applied Statistics (AOAS), 2023 |
| Revisiting Weighted Strategy for Non-stationary Parametric Bandits | International Conference on Artificial Intelligence and Statistics (AISTATS), 2023 |
| Mixed-Effect Thompson Sampling | International Conference on Artificial Intelligence and Statistics (AISTATS), 2022 |
| Reinforcement Learning in Modern Biostatistics: Constructing Optimal Adaptive Interventions | International Statistical Review (ISR), 2022 |
| Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models | Neural Information Processing Systems (NeurIPS), 2021 |
| Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach | Information Systems Research (ISR), 2020 |