
(Nearly) Optimal Private Linear Regression via Adaptive Clipping

Abstract

We study the problem of differentially private linear regression, where each data point is sampled from a fixed sub-Gaussian-style distribution. We propose and analyze a one-pass mini-batch stochastic gradient descent method (DP-AMBSSGD) where points in each iteration are sampled without replacement. Noise is added for DP, but the noise standard deviation is estimated online. Compared to existing $(\epsilon, \delta)$-DP techniques, which have sub-optimal error bounds, DP-AMBSSGD is able to provide nearly optimal error bounds in terms of key parameters like the dimensionality $d$, the number of points $N$, and the standard deviation $\sigma$ of the noise in the observations. For example, when the $d$-dimensional covariates are sampled i.i.d. from the normal distribution, the excess error of DP-AMBSSGD due to privacy is $\frac{\sigma^2 d}{N}\big(1+\frac{d}{\epsilon^2 N}\big)$, i.e., the error is meaningful when the number of samples $N = \Omega(d \log d)$, which is the standard operative regime for linear regression. In contrast, the error bound for existing efficient methods in this setting is $\mathcal{O}\big(\frac{d^3}{\epsilon^2 N^2}\big)$, even for $\sigma = 0$. That is, for constant $\epsilon$, existing techniques require $N = \Omega(d\sqrt{d})$ to provide a non-trivial result.
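
To make the high-level recipe concrete, here is a minimal sketch of one-pass, mini-batch DP-SGD with per-example gradient clipping for linear regression. This is not the paper's exact DP-AMBSSGD procedure: the naive per-iteration privacy split, the Gaussian-mechanism calibration, the clipping rule, and all hyperparameters below are illustrative assumptions, and the online update of the clipping threshold is shown as a non-private placeholder (in the actual method this scale is itself estimated privately).

```python
# Illustrative sketch only: one-pass DP mini-batch SGD with adaptive clipping
# for linear regression. Not the authors' DP-AMBSSGD; see lead-in for caveats.
import numpy as np

def dp_adaptive_sgd(X, y, epsilon, delta, batch_size=64, lr=0.1, seed=None):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    T = max(N // batch_size, 1)            # one pass: each point is used once
    # Naive uniform split of the privacy budget across iterations (assumption;
    # the paper relies on tighter accounting/amplification arguments).
    eps_t, delta_t = epsilon / T, delta / T
    clip = 1.0                             # initial clipping norm (assumption)
    for t in range(T):
        Xb = X[t * batch_size:(t + 1) * batch_size]
        yb = y[t * batch_size:(t + 1) * batch_size]
        residual = Xb @ w - yb
        grads = residual[:, None] * Xb     # per-example gradients, shape (B, d)
        # Clip each per-example gradient to norm <= clip.
        norms = np.linalg.norm(grads, axis=1)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
        g = grads.mean(axis=0)
        # Gaussian mechanism: noise scaled to the clipped sensitivity 2*clip/B.
        sigma_t = (2 * clip / len(Xb)) * np.sqrt(2 * np.log(1.25 / delta_t)) / eps_t
        w -= lr * (g + rng.normal(scale=sigma_t, size=d))
        # "Adaptive" part (non-private placeholder): track the residual scale
        # online and reset the clipping threshold for the next batch.
        clip = np.sqrt(d) * (np.median(np.abs(residual)) + 1e-6)
    return w
```

As a usage sketch, `w_hat = dp_adaptive_sgd(X, y, epsilon=1.0, delta=1e-6)` would return a private estimate of the regression vector after a single pass over `(X, y)`. The point of the adaptive threshold is the one made in the abstract: if the noise scale (and hence the gradient scale) is estimated online, the clipping level can track the actual gradients instead of a loose worst-case bound, which is what drives the improved dependence on $d$ and $N$.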
