We present a fairly general framework for reducing differentially private (DP) statistical estimation to its non-private counterpart. As the main application of this framework, we give a polynomial time and $(\varepsilon, \delta)$-DP algorithm for learning (unrestricted) Gaussian distributions in $\mathbb{R}^d$. The sample complexity of our approach for learning the Gaussian up to total variation distance $\alpha$ matches (up to logarithmic factors) the best known information-theoretic (non-efficient) sample complexity upper bound due to Aden-Ali, Ashtiani, and Kamath (ALT'21). In an independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman (arXiv:2111.04609) proved a similar result using a different approach, with a different sample complexity dependence on the dimension $d$. As another application of our framework, we provide the first polynomial time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians, with sample complexity polynomial in $d$. In another independent work, Kothari, Manurangsi, and Velingker (arXiv:2112.03548) also provided a polynomial time $(\varepsilon, \delta)$-DP algorithm for robust learning of Gaussians, again with sample complexity polynomial in $d$.
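The abstract does not spell out the reduction itself, but the basic primitive behind $(\varepsilon, \delta)$-DP estimation can be illustrated with the standard Gaussian mechanism: clip each sample to bound the sensitivity of the statistic, then add calibrated Gaussian noise. The sketch below privately estimates a mean this way; it is a generic illustration under standard assumptions, not the paper's framework, and the function name `dp_clipped_mean` and its parameters are ours.

```python
import numpy as np

def dp_clipped_mean(samples, clip_radius, eps, delta, rng=None):
    """(eps, delta)-DP mean estimate via the Gaussian mechanism.

    Generic illustration of private estimation, not the paper's
    reduction framework. Each sample is clipped to L2 norm
    `clip_radius`, so replacing one sample changes the mean by at
    most 2 * clip_radius / n (the L2 sensitivity of the mean).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(samples, dtype=float)
    n, d = x.shape

    # Clip each row to norm at most clip_radius.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = x * np.minimum(1.0, clip_radius / np.maximum(norms, 1e-12))

    # Calibrate Gaussian noise to the sensitivity (standard analysis
    # of the Gaussian mechanism, valid for eps <= 1).
    sensitivity = 2.0 * clip_radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return x.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

With many samples the added noise is small: for `n = 10000`, `eps = 1`, and `delta = 1e-6`, the noise standard deviation is on the order of `clip_radius / 1000`, so the private estimate stays close to the empirical mean.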