
Privacy Preserving Online Convex Optimization

Abstract

In this paper, we consider the problem of preserving privacy for online convex programming (OCP), an important online learning paradigm. We use the notion of differential privacy as our privacy measure. For this problem, we distill two critical attributes a private OCP algorithm should have, namely, linearly decreasing sensitivity and a sub-linear regret bound. Assuming these two conditions, we provide a general framework for OCP that preserves privacy while guaranteeing a sub-linear regret bound. We then analyze the Implicit Gradient Descent (IGD) algorithm for OCP in our framework, and show an $\tilde{O}(\sqrt{T})$ regret bound while preserving differential privacy for Lipschitz continuous, strongly convex cost functions. We also analyze the Generalized Infinitesimal Gradient Ascent (GIGA) method, a popular OCP algorithm, in our privacy preserving framework to obtain an $\tilde{O}(\sqrt{T})$ regret bound, albeit for a slightly more restricted class of strongly convex functions with Lipschitz continuous gradients. We then consider the practically important problem of online linear regression and show an $O(\log^{1.5} T)$ regret bound for the Follow The Leader (FTL) method, while preserving differential privacy. Finally, we empirically demonstrate the effectiveness of our privacy preserving algorithms on the problems of online linear regression and online logistic regression.
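To make the setting concrete, a common way to obtain differential privacy for online convex programming is to perturb each released iterate with noise calibrated to its (decreasing) sensitivity. The sketch below, in Python/NumPy, shows a noisy GIGA-style projected online gradient descent loop in this spirit: a 1/t step size (as used for strongly convex costs) and Gaussian noise whose scale also decays with the round index. The function name `private_ogd`, the parameters `eta0`, `noise_scale`, and `radius`, and the specific noise schedule are illustrative assumptions, not the paper's exact algorithm or noise calibration.

```python
import numpy as np

def private_ogd(grad_fns, x0, eta0, noise_scale, radius, seed=0):
    """Illustrative noisy online gradient descent (GIGA-style) loop.

    grad_fns[t](x) returns the gradient of the round-(t+1) cost at x.
    Each round takes a projected gradient step with a 1/t step size and
    releases the iterate plus Gaussian noise whose scale also decays like
    1/t, mirroring the "linearly decreasing sensitivity" property.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    released = []
    for t, grad in enumerate(grad_fns, start=1):
        eta_t = eta0 / t                       # decaying step size for strong convexity
        x = x - eta_t * grad(x)                # gradient step on the round-t cost
        norm = np.linalg.norm(x)
        if norm > radius:                      # project back onto the feasible ball
            x = x * (radius / norm)
        noisy_x = x + rng.normal(scale=noise_scale / t, size=x.shape)
        released.append(noisy_x)               # only the noisy iterate is published
    return released


# Toy usage: online linear regression losses 0.5 * (<x, a_t> - y_t)^2 with an L2 term.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = [(rng.normal(size=3), rng.normal()) for _ in range(100)]
    grads = [lambda x, a=a, y=y: (x @ a - y) * a + 0.1 * x for a, y in data]
    iterates = private_ogd(grads, x0=np.zeros(3), eta0=1.0, noise_scale=0.5, radius=5.0)
    print(iterates[-1])
```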
