Uncertainty of Joint Neural Contextual Bandit

Abstract

Contextual bandit learning is increasingly favored in modern large-scale recommendation systems. To better utilize the contextual information and available user or item features, the integration of neural networks has been introduced to enhance contextual bandit learning and has triggered significant interest from both academia and industry. However, a major challenge arises when implementing a disjoint neural contextual bandit solution in large-scale recommendation systems, where each item or user may correspond to a separate bandit arm. The huge number of items to recommend poses a significant hurdle for real-world production deployment. This paper focuses on a joint neural contextual bandit solution that serves all items to be recommended with one single model. The output consists of a predicted reward $\mu$, an uncertainty $\sigma$, and a hyper-parameter $\alpha$ that balances exploitation and exploration, e.g., via the score $\mu + \alpha \sigma$. The tuning of the parameter $\alpha$ is typically heuristic and complex in practice due to its stochastic nature. To address this challenge, we provide both theoretical analysis and experimental findings regarding the uncertainty $\sigma$ of the joint neural contextual bandit model. Our analysis reveals that $\sigma$ exhibits an approximate square root relationship with the size of the last hidden layer $F$ and an inverse square root relationship with the amount of training data $N$, i.e., $\sigma \propto \sqrt{\frac{F}{N}}$. The experiments, conducted with real industrial data, align with the theoretical analysis, help explain model behavior, and assist hyper-parameter tuning during both offline training and online deployment.
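To make the scoring rule and the scaling claim concrete, the following is a minimal numerical sketch, not code from the paper: it assumes a hypothetical `ucb_score` helper for the exploration-adjusted score $\mu + \alpha \sigma$ and a hypothetical `sigma_scale` helper for the $\sigma \propto \sqrt{F/N}$ relationship, with the constant factor and the $\alpha$ value chosen arbitrarily for illustration.

```python
import numpy as np


def ucb_score(mu, sigma, alpha=1.0):
    """Exploration-adjusted score for an item: mu + alpha * sigma.

    mu    : predicted reward from the joint model
    sigma : predicted uncertainty for the same (context, item) pair
    alpha : hyper-parameter trading off exploitation and exploration
    """
    return mu + alpha * sigma


def sigma_scale(F, N, c=1.0):
    """Rough uncertainty scale implied by the abstract's analysis:
    sigma grows with the last-hidden-layer width F and shrinks with
    the amount of training data N, i.e. sigma ~ c * sqrt(F / N).
    The constant c is a hypothetical problem-dependent factor, not
    something specified by the paper.
    """
    return c * np.sqrt(F / N)


# Toy usage: rank a handful of candidate items by their score.
rng = np.random.default_rng(0)
mu = rng.uniform(0.0, 1.0, size=5)                      # placeholder predicted rewards
sigma = np.full(5, sigma_scale(F=128, N=1_000_000))     # shared uncertainty scale
ranking = np.argsort(-ucb_score(mu, sigma, alpha=2.0))  # higher score ranks first
print("item ranking:", ranking)
```

Under this reading, doubling the last hidden layer width roughly inflates $\sigma$ by $\sqrt{2}$, while quadrupling the training data roughly halves it, which is one way to reason about how $\alpha$ may need to be retuned as the model or dataset changes.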
