Toward Evaluating Re-identification Risks in the Local Privacy Model

Transactions on Data Privacy (TDP), 2020
16 October 2020
Takao Murakami
Kenta Takahashi
Abstract

LDP (Local Differential Privacy) has recently attracted much attention as a metric of data privacy that prevents the inference of personal data from obfuscated data in the local model. However, there are scenarios in which the adversary needs to perform re-identification attacks to link the obfuscated data to users in this model. Because LDP is not designed to directly prevent re-identification, it can cause excessive obfuscation and destroy utility in these scenarios. In this paper, we propose a privacy metric which we call the PIE (Personal Information Entropy). The PIE is designed to directly prevent re-identification attacks in the local model: it lower-bounds the lowest possible re-identification error probability (i.e., the Bayes error probability) of the adversary. We analyze the relation between LDP and the PIE, and analyze the PIE and utility in distribution estimation for two obfuscation mechanisms providing LDP. Through experiments, we show that LDP fails to guarantee meaningful privacy and utility in distribution estimation. We then show that the PIE can be used to guarantee low re-identification risks for the local obfuscation mechanisms while maintaining high utility.
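The abstract refers to obfuscation mechanisms providing LDP and to distribution estimation over the obfuscated reports. As a minimal, hypothetical sketch (not the paper's own mechanisms or code), k-ary randomized response is a standard ε-LDP mechanism, and the frequency correction below is the usual unbiased estimator of the true category distribution:

```python
import math
import random

def k_randomized_response(x, k, eps):
    """k-ary randomized response: an eps-LDP obfuscation mechanism.
    Reports the true category x (0 <= x < k) with probability
    e^eps / (e^eps + k - 1); otherwise a uniformly random other category."""
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return x
    return random.choice([v for v in range(k) if v != x])

def estimate_distribution(reports, k, eps):
    """Unbiased estimate of the true category distribution from
    randomized-response reports, via the standard frequency correction."""
    n = len(reports)
    e = math.exp(eps)
    p = e / (e + k - 1)    # P(report = true category)
    q = 1.0 / (e + k - 1)  # P(report = a specific other category)
    est = []
    for v in range(k):
        f = sum(1 for r in reports if r == v) / n
        est.append((f - q) / (p - q))
    return est
```

For small ε (strong LDP), p and q are close, so the correction factor 1/(p - q) blows up the estimator's variance; this is the utility loss the abstract argues can be excessive when re-identification, not inference, is the actual threat.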
