DP-ADMM: ADMM-based Distributed Learning with Differential Privacy

Abstract

Privacy-preserving distributed machine learning has become more important than ever due to the high demand for large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach for such problems. We use the Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide a differential privacy guarantee. However, simply combining ADMM with local randomization mechanisms yields a nonconvergent algorithm with poor performance even under moderate privacy guarantees. Moreover, this intuitive approach requires the strong assumption that the objective functions of the learning problems be differentiable and strongly convex. To address these concerns, we propose an improved ADMM-based Differentially Private distributed learning algorithm, DP-ADMM, which uses an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DP-ADMM applies to a general class of convex learning problems, provides a differential privacy guarantee, and achieves a convergence rate of O(1/\sqrt{t}), where t is the number of iterations. Our evaluations demonstrate that the approach achieves good convergence and accuracy under a moderate privacy guarantee.
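To make the scheme concrete, the following is a minimal NumPy sketch of noisy consensus ADMM for distributed logistic regression, in the spirit of the abstract: each agent performs a closed-form primal update derived from a linearized (approximate) augmented Lagrangian, then perturbs it with Gaussian noise whose standard deviation decays with the iteration count (time-varying variance). All names and hyperparameters (dp_admm_sketch, rho, eta, sigma0, the 1/sqrt(t) decay schedule) are illustrative assumptions, not the paper's algorithm, and the noise scale is not calibrated to a formal (epsilon, delta) budget via the moments accountant.

import numpy as np

def logistic_grad(w, X, y):
    # Gradient of the average logistic loss at w; labels y are in {-1, +1}.
    margins = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)

def dp_admm_sketch(data, T=50, rho=1.0, eta=5.0, sigma0=1.0, seed=0):
    # data: list of (X_i, y_i) pairs, one per agent. Hypothetical sketch,
    # not the authors' calibrated procedure.
    rng = np.random.default_rng(seed)
    d = data[0][0].shape[1]
    n = len(data)
    x = [np.zeros(d) for _ in range(n)]    # local primal variables
    lam = [np.zeros(d) for _ in range(n)]  # dual variables
    z = np.zeros(d)                        # global consensus variable
    for t in range(1, T + 1):
        for i, (X, y) in enumerate(data):
            g = logistic_grad(x[i], X, y)
            # Closed-form minimizer of the linearized augmented Lagrangian:
            # g'x + lam_i'x + (rho/2)||x - z||^2 + (eta/2)||x - x_i||^2
            x_new = (eta * x[i] + rho * z - lam[i] - g) / (eta + rho)
            # Gaussian perturbation with time-varying standard deviation.
            x[i] = x_new + rng.normal(0.0, sigma0 / np.sqrt(t), size=d)
        z = np.mean([x[i] + lam[i] / rho for i in range(n)], axis=0)
        for i in range(n):
            lam[i] += rho * (x[i] - z)
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = rng.normal(size=5)
    agents = []
    for _ in range(4):  # four agents with synthetic local data
        X = rng.normal(size=(200, 5))
        y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
        agents.append((X, y))
    w_hat = dp_admm_sketch(agents)
    print("cosine similarity to true weights:",
          w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true)))

The decaying noise schedule used here is one plausible reading of "time-varying variance": early iterates tolerate larger perturbations, while shrinking noise in later iterations lets the consensus variable settle, which is consistent with the O(1/\sqrt{t}) rate the abstract reports.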
