
Generalized Linear Bandits with Limited Adaptivity

Abstract

We study the generalized linear contextual bandit problem within the constraints of limited adaptivity. In this paper, we present two algorithms, \texttt{B-GLinCB} and \texttt{RS-GLinCB}, that address, respectively, two prevalent limited adaptivity settings. Given a budget $M$ on the number of policy updates, in the first setting the algorithm needs to decide upfront the $M$ rounds at which it will update its policy, while in the second setting it can adaptively perform $M$ policy updates during its course. For the first setting, we design an algorithm \texttt{B-GLinCB} that incurs $\tilde{O}(\sqrt{T})$ regret when $M = \Omega(\log\log T)$ and the arm feature vectors are generated stochastically. For the second setting, we design an algorithm \texttt{RS-GLinCB} that updates its policy $\tilde{O}(\log^2 T)$ times and achieves a regret of $\tilde{O}(\sqrt{T})$ even when the arm feature vectors are adversarially generated. Notably, in these bounds, we manage to eliminate the dependence on a key instance-dependent parameter $\kappa$, which captures the non-linearity of the underlying reward model. Our novel approach for removing this dependence for generalized linear contextual bandits might be of independent interest.
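
To make the two update regimes concrete, here is a minimal Python sketch. The helper names batch_schedule and should_update are hypothetical, the geometric grid and the determinant-doubling trigger are standard techniques from the limited-adaptivity literature, and neither is claimed to be the exact rule used by \texttt{B-GLinCB} or \texttt{RS-GLinCB}.

import numpy as np

def batch_schedule(T: int, M: int) -> list[int]:
    """Setting 1: commit upfront to M update rounds.

    A geometric grid t_i ~ T^(1 - 2^-i) is a standard choice; with
    M = Omega(log log T) such rounds, grids of this flavor are known to
    support sqrt(T)-type regret in (generalized) linear bandits.
    """
    grid = {min(max(int(T ** (1 - 2.0 ** (-i))), 1), T) for i in range(1, M + 1)}
    return sorted(grid)

def should_update(V: np.ndarray, det_at_last_update: float, C: float = 2.0) -> bool:
    """Setting 2: adaptive ("rarely switching") updates.

    A standard trigger (an assumption here, not necessarily the paper's
    exact rule): refit the policy only once the design matrix has gained
    enough information, i.e. det(V_t) > C * det(V at the last update).
    Over T rounds this fires only O(d log T) times.
    """
    return np.linalg.det(V) > C * det_at_last_update

# Tiny driver showing how the adaptive trigger behaves.
T, d = 10_000, 5
V = np.eye(d)                            # regularized design matrix V_t
det_last = np.linalg.det(V)
n_updates = 0
rng = np.random.default_rng(0)
for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)  # placeholder arm feature vector
    V += np.outer(x, x)
    if should_update(V, det_last):
        det_last = np.linalg.det(V)      # a real algorithm would refit its GLM estimator here
        n_updates += 1
print("upfront grid:", batch_schedule(T, M=4))
print("adaptive updates:", n_updates)    # grows like d * log(T)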

@article{sawarni2025_2404.06831,
  title={Generalized Linear Bandits with Limited Adaptivity},
  author={Ayush Sawarni and Nirjhar Das and Siddharth Barman and Gaurav Sinha},
  journal={arXiv preprint arXiv:2404.06831},
  year={2025}
}