The Privacy Funnel from the viewpoint of Local Differential Privacy

4 February 2020
Milan Lopuhaä-Zwakenberg
Abstract

We consider a database $\vec{X} = (X_1,\cdots,X_n)$ containing the data of $n$ users. The data aggregator wants to publicise the database, but wishes to sanitise the dataset to hide sensitive data $S_i$ correlated with $X_i$. This setting is considered in the Privacy Funnel, which uses mutual information as a leakage metric. The downsides of this approach are that mutual information does not give worst-case guarantees, and that finding optimal sanitisation protocols can be computationally prohibitive. We tackle these problems by using differential privacy metrics, and by considering local protocols, which operate on one entry at a time. We show that under both the Local Differential Privacy and Local Information Privacy leakage metrics, one can efficiently obtain optimal protocols; however, Local Information Privacy is both more closely aligned with the privacy requirements of the Privacy Funnel scenario and more efficiently computable. We also consider the scenario where each user has multiple attributes (i.e. $X_i = (X^1_i,\cdots,X^m_i)$), for which we define \emph{Side-channel Resistant Local Information Privacy}, and we give efficient methods to find protocols satisfying this criterion while still offering good utility. Exploratory experiments confirm the validity of these methods.
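To illustrate what a "local protocol" means in this setting, the sketch below implements generalized randomized response, a classic mechanism satisfying $\varepsilon$-Local Differential Privacy that perturbs each user's entry $X_i$ independently of all other entries. This is a standard textbook baseline, not the optimal protocol derived in the paper; the function name and interface are our own for illustration.

```python
import math
import random

def k_randomized_response(x, k, epsilon):
    """Generalized randomized response: a standard eps-LDP local protocol.

    Reports the true value x (an integer in {0, ..., k-1}) with probability
    e^eps / (e^eps + k - 1), and each of the k-1 other values with
    probability 1 / (e^eps + k - 1). The ratio of any two output
    probabilities is then at most e^eps, which is the eps-LDP guarantee.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return x
    # Otherwise report one of the k-1 remaining values uniformly at random.
    other = random.randrange(k - 1)
    return other if other < x else other + 1
```

Because the mechanism touches only one entry at a time, the aggregator can sanitise the whole database $\vec{X}$ by applying it entrywise, which is what makes the local setting computationally tractable compared with optimising over joint sanitisation protocols.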
