Private Convex Optimization via Exponential Mechanism

In this paper, we study private optimization problems for non-smooth convex functions $F(x)=\mathbb{E}_i f_i(x)$ on $\mathbb{R}^d$. We show that modifying the exponential mechanism by adding an $\ell_2^2$ regularizer to $F(x)$ and sampling from $\pi(x) \propto \exp(-k(F(x)+\mu\|x\|_2^2/2))$ recovers both the known optimal empirical risk and population loss under $(\epsilon,\delta)$-DP. Furthermore, we show how to implement this mechanism using $\widetilde{O}(n \min(d, n))$ queries to $f_i(x)$ for the DP-SCO problem, where $n$ is the number of samples/users and $d$ is the ambient dimension. We also give a (nearly) matching lower bound of $\widetilde{\Omega}(n \min(d, n))$ on the number of evaluation queries.

Our results utilize the following tools, which are of independent interest: (1) We prove Gaussian Differential Privacy (GDP) of the exponential mechanism if the loss function is strongly convex and the perturbation is Lipschitz. Our privacy bound is \emph{optimal}, as it includes the privacy of the Gaussian mechanism as a special case, and is proved using the isoperimetric inequality for strongly log-concave measures. (2) We show how to sample from $\exp(-F(x)-\mu\|x\|_2^2/2)$ for $G$-Lipschitz $F$ with $\eta$ error in total variation (TV) distance using $\widetilde{O}((G^2/\mu)\log^2(d/\eta))$ unbiased queries to $F(x)$. This is the first sampler whose query complexity has \emph{polylogarithmic dependence} on both the dimension $d$ and the accuracy $\eta$.
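To make the mechanism concrete, below is a minimal Python sketch (not the paper's algorithm) that draws a private output from the regularized exponential mechanism $\pi(x) \propto \exp(-k(F(x)+\mu\|x\|_2^2/2))$ on a toy non-smooth Lipschitz loss, using plain random-walk Metropolis. The dataset, the constants $k$ and $\mu$, and the step size are all placeholder choices for illustration; in the paper these are calibrated to $n$, $d$, the Lipschitz constant, and the privacy budget, and the paper's sampler achieves far better query complexity than this generic chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset and non-smooth convex loss F(x) = (1/n) * sum_i ||x - a_i||_2,
# which is 1-Lipschitz in x. Dataset size and dimension are placeholders.
A = rng.normal(size=(50, 2))

def F(x):
    return np.mean(np.linalg.norm(A - x, axis=1))

k, mu = 40.0, 0.5  # inverse temperature and regularizer weight (placeholders)

def neg_log_pi(x):
    # -log pi(x) up to a constant, where pi(x) ∝ exp(-k * (F(x) + mu * ||x||^2 / 2))
    return k * (F(x) + 0.5 * mu * (x @ x))

# Generic random-walk Metropolis chain targeting pi; step size is a placeholder.
x = np.zeros(2)
for _ in range(20_000):
    y = x + 0.1 * rng.normal(size=2)
    # Accept y with probability min(1, pi(y) / pi(x)).
    if np.log(rng.random()) < neg_log_pi(x) - neg_log_pi(y):
        x = y

print("private output:", x, "loss:", F(x))  # one (approximate) draw from pi
```

The final state of the chain plays the role of the mechanism's single private sample: larger $k$ concentrates $\pi$ near the regularized minimizer (better utility, weaker privacy), while the $\mu\|x\|_2^2/2$ term makes the target strongly log-concave, which is what drives both the GDP analysis and the fast sampling in the paper.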