Enjoying Non-linearity in Multinomial Logistic Bandits
We consider the multinomial logistic bandit problem, a variant of generalized linear bandits where a learner interacts with an environment by selecting actions to maximize expected rewards based on probabilistic feedback from multiple possible outcomes. In the binary setting, recent work has focused on understanding the impact of the non-linearity of the logistic model (Faury et al., 2020; Abeille et al., 2021). They introduced a problem-dependent constant $\kappa_*$, which may be exponentially large in some problem parameters and is captured by the derivative of the sigmoid function. It encapsulates the non-linearity, and exploiting it improves existing regret guarantees over $T$ rounds from $ \smash{\widetilde{\mathcal{O}}( d \sqrt{T} )} $ to $ \smash{\widetilde{\mathcal{O}}( d \sqrt{{T}/{\kappa_*}} )} $, where $d$ is the dimension of the parameter space. We extend their analysis to the multinomial logistic bandit framework, making it suitable for complex applications with more than two choices, such as reinforcement learning or recommender systems. To achieve this, we extend the definition of $\kappa_*$ to the multinomial setting and propose an efficient algorithm that leverages the problem's non-linearity. Our method yields a problem-dependent regret bound of order $ \smash{\widetilde{\mathcal{O}}( Kd \sqrt{{T}/{\kappa_*}})} $, where $K$ is the number of actions and $\kappa_*$ is the extended non-linearity constant. This improves upon the best existing guarantees of order $ \smash{\widetilde{\mathcal{O}}( Kd \sqrt{T} )} $. Moreover, we provide a lower bound showing that our dependence on $\kappa_*$ is optimal.
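For concreteness, the binary-setting quantities that the paper generalizes can be written out explicitly; the following is a minimal sketch using the standard definitions from Faury et al. (2020) and Abeille et al. (2021), not the paper's multinomial construction itself:

$$
\mu(z) = \frac{1}{1 + e^{-z}}, \qquad
\dot{\mu}(z) = \mu(z)\bigl(1 - \mu(z)\bigr), \qquad
\kappa_* = \frac{1}{\dot{\mu}(x_*^\top \theta_*)},
$$

where $x_*$ denotes the optimal action and $\theta_*$ the unknown parameter. Since $\dot{\mu}(z) \le 1/4$ everywhere and decays as $e^{-|z|}$, one always has $\kappa_* \ge 4$, and $\kappa_*$ can be of order $e^{S}$ when $\|\theta_*\| \le S$ and actions have unit norm. Because $\kappa_*$ appears under $T$ inside the square root, a highly non-linear (saturated) optimum shrinks the regret bound rather than inflating it.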