
arXiv:2101.09139

The Privacy-Utility Tradeoff of Robust Local Differential Privacy

22 January 2021
Milan Lopuhaä-Zwakenberg
J. Goseling
Abstract

We consider data release protocols for data X = (S, U), where S is sensitive; the released data Y should contain as much information about X as possible, measured as the mutual information I(X; Y), without leaking too much about S. We introduce the Robust Local Differential Privacy (RLDP) framework to measure privacy. This framework relies on the underlying distribution of the data, which must be estimated from available data. Robust privacy guarantees ensure privacy for all distributions in a given set F, for which we study two cases: when F is the set of all distributions, and when F is a confidence set arising from a χ² test on a publicly available dataset. In the former case we introduce a new release protocol which we prove to be optimal in the low privacy regime. In the latter case we present four algorithms that construct RLDP protocols from a given dataset. One of these approximates F by a polytope and uses results from robust optimisation to yield high-utility release protocols. However, this algorithm relies on vertex enumeration and becomes computationally intractable for large input spaces. The other three algorithms are low-complexity and build on randomised response. Experiments verify that all four algorithms offer significantly improved utility over regular LDP.
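The low-complexity algorithms build on randomised response, the textbook ε-LDP mechanism. As background, here is a minimal sketch of classical k-ary randomised response (the function name and interface are illustrative, not taken from the paper; the paper's algorithms adapt such mechanisms using distributional information):

```python
import math
import random

def randomized_response(x, domain, epsilon):
    """k-ary randomised response, a standard epsilon-LDP mechanism.

    Report the true value x with probability e^eps / (e^eps + k - 1);
    otherwise report one of the other k - 1 values uniformly at random.
    """
    k = len(domain)
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_truth:
        return x
    return random.choice([v for v in domain if v != x])
```

For any two inputs, the ratio of the probabilities of producing a given output is at most e^ε, which is exactly the local differential privacy guarantee; higher ε keeps the report truthful more often, trading privacy for utility.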
