
Locally Differentially Private (Contextual) Bandits Learning

Neural Information Processing Systems (NeurIPS), 2020
Abstract

We study locally differentially private (LDP) bandits learning in this paper. First, we propose simple black-box reduction frameworks that can solve a large family of context-free bandits learning problems with an LDP guarantee. Based on our frameworks, we improve the previous best results for private bandits learning with one-point feedback, such as private Bandits Convex Optimization, and obtain the first results for Bandits Convex Optimization (BCO) with multi-point feedback under LDP. The LDP guarantee and the black-box nature make our frameworks more attractive in real applications than previous specifically designed, and relatively weaker, differentially private (DP) context-free bandits algorithms. Further, we extend our algorithm to Generalized Linear Bandits with regret bound $\tilde{\mathcal{O}}(T^{3/4}/\varepsilon)$ under $(\varepsilon, \delta)$-LDP, which is conjectured to be optimal. Note that, given the existing $\Omega(T)$ lower bound for DP contextual linear bandits (Shariff & Sheffet, NeurIPS 2018), our result shows a fundamental difference between LDP and DP contextual bandits learning.
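In the local model, each user randomizes their own feedback before it ever leaves their device, so the learner only sees privatized rewards. As a minimal illustration of this idea (not the paper's actual reduction framework), the sketch below perturbs a bounded bandit reward with Laplace noise to satisfy ε-LDP; the function name `privatize_reward` and its parameters are hypothetical:

```python
import math
import random

def privatize_reward(reward: float, epsilon: float, bound: float = 1.0) -> float:
    """Return an eps-LDP view of a reward, assuming rewards lie in [-bound, bound].

    Clipping makes the sensitivity 2 * bound, so Laplace noise with
    scale 2 * bound / epsilon yields an epsilon-LDP, unbiased estimate
    of the clipped reward.
    """
    clipped = max(-bound, min(bound, reward))
    scale = 2.0 * bound / epsilon
    # Sample Laplace(0, scale) via inverse-CDF transform of a uniform.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return clipped + noise
```

Because the noise is zero-mean, the learner can average many privatized rewards to recover the true mean, which is the basic reason context-free bandits admit such black-box reductions; the price is extra variance on the order of $1/\varepsilon^2$ per sample.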
