Achieving Differential Privacy with Matrix Masking in Big Data

11 January 2022
A. Ding
Samuel S. Wu
G. Miao
Shigang Chen
Abstract

Differential privacy schemes have been widely adopted in recent years to address issues of data privacy protection. We propose a new Gaussian scheme, combined with another data protection technique called random orthogonal matrix masking, to achieve $(\varepsilon, \delta)$-differential privacy (DP) more efficiently. We prove that the additional matrix masking significantly reduces the rate of the noise variance required by the Gaussian scheme to achieve $(\varepsilon, \delta)$-DP in the big data setting. Specifically, when $\varepsilon \to 0$, $\delta \to 0$, and the sample size $n$ exceeds the number $p$ of attributes by $\frac{n}{p} = O(\ln(1/\delta))$, the additive noise variance required to achieve $(\varepsilon, \delta)$-DP is reduced from $O(\ln(1/\delta)/\varepsilon^2)$ to $O(1/\varepsilon)$. With much less noise added, the resulting differentially private pseudo data sets allow much more accurate inferences and thus significantly broaden the scope of application for differential privacy.
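The two-step release described in the abstract, masking the data with a random orthogonal matrix and then adding Gaussian noise, can be sketched as follows. This is a minimal illustration, not the authors' exact mechanism: it assumes the mask is a Haar-distributed orthogonal matrix applied on the left of the $n \times p$ data matrix, and it takes the noise scale `sigma` as given rather than deriving the $(\varepsilon, \delta)$ calibration from the paper's analysis. The function name `mask_and_perturb` is hypothetical.

```python
import numpy as np

def mask_and_perturb(X, sigma, rng=None):
    """Sketch: random orthogonal matrix masking followed by additive Gaussian noise.

    X     : (n, p) data matrix with n records and p attributes.
    sigma : standard deviation of the additive Gaussian noise; its calibration
            to a target (epsilon, delta) follows the paper's analysis and is
            assumed to be done elsewhere.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape

    # Draw a Haar-distributed random orthogonal n x n matrix Q via the QR
    # decomposition of a standard Gaussian matrix, with the usual sign fix.
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))

    # Mask the records by left-multiplication, then add i.i.d. Gaussian noise.
    return Q @ X + rng.normal(scale=sigma, size=(n, p))
```

Because the masking matrix is drawn independently of the data, the masking step itself introduces randomness that, per the paper's result, lets `sigma` be chosen much smaller than in a plain Gaussian scheme once $n/p$ is on the order of $\ln(1/\delta)$.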
