The Price of Differential Privacy For Online Learning

Abstract

We design differentially private algorithms for the problem of online linear optimization in the full-information and bandit settings with optimal $\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our results demonstrate that $\epsilon$-differential privacy may be ensured for free -- in particular, the regret bounds scale as $O(\sqrt{T}) + \tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear optimization, and as a special case, for non-stochastic multi-armed bandits, the proposed algorithm achieves a regret of $\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously best known regret bound was $\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$.
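To make the improvement concrete, the following minimal sketch (not from the paper; the function names and the chosen values of $T$ and $\epsilon$ are illustrative assumptions) evaluates the dominant terms of the three stated regret bounds, ignoring constants and logarithmic factors:

```python
import math

# Dominant terms of the regret bounds stated in the abstract
# (constants and polylog factors dropped; names are illustrative).

def full_info_regret(T: float, eps: float) -> float:
    # Full information: O(sqrt(T)) + O~(1/eps) -- privacy is "for free",
    # since the 1/eps term does not grow with T.
    return math.sqrt(T) + 1.0 / eps

def bandit_regret(T: float, eps: float) -> float:
    # Bandit setting, this paper: O~((1/eps) * sqrt(T)).
    return math.sqrt(T) / eps

def prior_bandit_regret(T: float, eps: float) -> float:
    # Previously best known bandit bound: O~((1/eps) * T^(2/3)).
    return T ** (2.0 / 3.0) / eps

T, eps = 10**6, 0.1  # example horizon and privacy budget
print(full_info_regret(T, eps))    # sqrt(1e6) + 10 = 1010.0
print(bandit_regret(T, eps))       # 1e3 / 0.1 = 10000.0
print(prior_bandit_regret(T, eps)) # 1e4 / 0.1 = 100000.0
```

At $T = 10^6$ and $\epsilon = 0.1$, the new bandit bound is an order of magnitude smaller than the prior $T^{2/3}$ bound, and the gap widens as $T$ grows.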