Differentially Private Online-to-Batch for Smooth Losses

Abstract
We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
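For reference, here is a minimal sketch of the classical non-private online-to-batch conversion the abstract invokes: run an online learner on i.i.d. losses and return the average of its iterates, so a learner with regret $R(T)$ yields expected excess risk at most $R(T)/T$ (Cesa-Bianchi, Conconi, and Gentile, 2004). The choice of online gradient descent as the learner, the squared loss, and all hyperparameters are illustrative assumptions, not details from the paper; the paper's private variant (which additionally injects calibrated noise) is not specified in the abstract, so this sketch is non-private.

```python
# Minimal sketch of the classical non-private online-to-batch conversion.
# Assumptions (not from the abstract): the online learner is plain online
# gradient descent, the stochastic loss is least-squares on i.i.d. (x, y)
# pairs, and the learning rate is illustrative.
import numpy as np

def online_to_batch(samples, dim, lr=0.05):
    """Run an online learner on i.i.d. losses; return the average iterate."""
    w = np.zeros(dim)            # online learner's current iterate
    avg = np.zeros(dim)          # running average of iterates (batch output)
    for t, (x, y) in enumerate(samples, start=1):
        avg += (w - avg) / t     # incremental average: avg_t = mean(w_1..w_t)
        grad = (w @ x - y) * x   # gradient of the loss 0.5 * (w.x - y)^2
        w -= lr * grad           # online gradient descent step
    return avg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 5, 2000
    w_star = rng.normal(size=d)
    X = rng.normal(size=(T, d))
    y = X @ w_star + 0.1 * rng.normal(size=T)
    w_hat = online_to_batch(zip(X, y), dim=d)
    print("parameter error:", np.linalg.norm(w_hat - w_star))
```

The incremental averaging keeps the conversion single-pass with constant memory per step, consistent with the linear-time property the abstract claims for the private reduction.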