Towards Domain Adaptive Neural Contextual Bandits

Contextual bandit algorithms are essential for solving real-world decision-making problems. In practice, collecting bandit feedback from different domains may incur different costs; for example, measuring drug reactions in mice (as a source domain) versus in humans (as a target domain). Unfortunately, adapting a contextual bandit algorithm from a source domain to a target domain under distribution shift remains a major challenge and is largely unexplored. In this paper, we introduce the first general domain adaptation method for contextual bandits. Our approach learns a bandit model for the target domain by collecting feedback from the source domain. Our theoretical analysis shows that our algorithm maintains a sub-linear regret bound even when adapting across domains. Empirical results show that our approach outperforms state-of-the-art contextual bandit algorithms on real-world datasets.
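As background for readers unfamiliar with contextual bandits, the following is a minimal sketch of a standard single-domain contextual bandit loop (LinUCB-style linear upper confidence bounds). This is generic illustration only, not the paper's domain-adaptation algorithm, which the abstract does not specify; all names and parameters (`n_arms`, `dim`, `alpha`, the synthetic reward model) are illustrative assumptions.

```python
import numpy as np

# Illustrative LinUCB loop: per-arm ridge regression with an exploration
# bonus. NOT the paper's method; a generic contextual bandit baseline.
rng = np.random.default_rng(0)
n_arms, dim, alpha, T = 3, 5, 1.0, 500

# Per-arm statistics: A is the d x d Gram matrix, b the reward-weighted sum.
A = [np.eye(dim) for _ in range(n_arms)]
b = [np.zeros(dim) for _ in range(n_arms)]

# Hidden linear reward parameters, unknown to the learner (synthetic data).
theta_true = rng.normal(size=(n_arms, dim))

total_reward = 0.0
for t in range(T):
    x = rng.normal(size=dim)  # observed context for this round
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]  # ridge-regression estimate for arm a
        # Mean prediction plus confidence-width exploration bonus.
        scores.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))
    a = int(np.argmax(scores))          # play the optimistic arm
    r = theta_true[a] @ x + 0.1 * rng.normal()  # noisy linear reward
    total_reward += r
    A[a] += np.outer(x, x)              # update chosen arm's statistics
    b[a] += r * x
```

The domain-adaptation setting described in the abstract would differ in that the feedback `r` comes from a source domain whose context distribution is shifted relative to the target domain on which regret is measured.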
@article{wang2025_2406.09564,
  title={Towards Domain Adaptive Neural Contextual Bandits},
  author={Ziyan Wang and Xiaoming Huo and Hao Wang},
  journal={arXiv preprint arXiv:2406.09564},
  year={2025}
}