
(Locally) Differentially Private Combinatorial Semi-Bandits

International Conference on Machine Learning (ICML), 2020
Abstract

In this paper, we study Combinatorial Semi-Bandits (CSB), an extension of the classic Multi-Armed Bandits (MAB) problem, under Differential Privacy (DP) and the stronger Local Differential Privacy (LDP) setting. Since the server receives more information from users in CSB, privacy protection usually incurs an additional dependence on the dimension of the data, a notorious side-effect of privacy-preserving learning. However, for CSB under two common smoothness assumptions \cite{kveton2015tight,chen2016combinatorial}, we show that this side-effect can be removed. Specifically, for $B_\infty$-bounded smooth CSB under either $\varepsilon$-LDP or $\varepsilon$-DP, we prove the optimal regret bound is $\Theta(\frac{mB_\infty^2\ln T}{\Delta\varepsilon^2})$ or $\tilde{\Theta}(\frac{mB_\infty^2\ln T}{\Delta\varepsilon})$, respectively, where $T$ is the time horizon, $\Delta$ is the reward gap, and $m$ is the number of base arms, by proposing novel algorithms and matching lower bounds. For $B_1$-bounded smooth CSB under $\varepsilon$-DP, we also prove the optimal regret bound is $\tilde{\Theta}(\frac{mKB_1^2\ln T}{\Delta\varepsilon})$ with both upper and lower bounds, where $K$ is the maximum number of feedback in each round. All of the above results nearly match the corresponding non-private optimal rates, which implies that there is no additional price for (locally) differentially private CSB in these common settings.
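For quick reference, the three optimal rates stated in the abstract can be collected in one display (notation exactly as above; this is only a restatement, not an additional result):

```latex
\begin{align*}
\text{$B_\infty$-bounded smooth CSB, $\varepsilon$-LDP:}
  &\quad \Theta\!\left(\frac{m B_\infty^2 \ln T}{\Delta \varepsilon^2}\right) \\
\text{$B_\infty$-bounded smooth CSB, $\varepsilon$-DP:}
  &\quad \tilde{\Theta}\!\left(\frac{m B_\infty^2 \ln T}{\Delta \varepsilon}\right) \\
\text{$B_1$-bounded smooth CSB, $\varepsilon$-DP:}
  &\quad \tilde{\Theta}\!\left(\frac{m K B_1^2 \ln T}{\Delta \varepsilon}\right)
\end{align*}
```

Here $T$ is the time horizon, $\Delta$ the reward gap, $m$ the number of base arms, and $K$ the maximum number of feedback per round.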
