Mitigating Data Poisoning Attacks to Local Differential Privacy

Main: 14 pages
Figures: 12
Tables: 6
Bibliography: 1 page
Appendix: 4 pages
Abstract

The distributed nature of local differential privacy (LDP) invites data poisoning attacks and poses unforeseen threats to the underlying LDP-supported applications. In this paper, we propose a comprehensive mitigation framework for popular frequency-estimation protocols, which contains a suite of novel defenses, including malicious user detection, attack pattern recognition, and damaged utility recovery. In addition to existing attacks, we explore new adaptive adversarial activities for our mitigation design. For detection, we present a new method to precisely identify bogus reports so that LDP aggregation can be performed over the ``clean'' data. When the attack behavior becomes stealthy and directly filtering out malicious users is difficult, we further propose a detection method that can effectively recognize hidden adversarial patterns, thus facilitating the decision-making of service providers. These detection methods require no additional data or prior attack knowledge and incur minimal computational cost. Our experiments demonstrate their excellent performance and substantial improvement over previous work in various settings. In addition, we conduct an empirical analysis of LDP post-processing for corrupted data recovery and propose a new post-processing method, through which we reveal new insights into protocol recommendations in practice and key design principles for future research.
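To make the threat model concrete, the sketch below simulates one standard LDP frequency-estimation protocol, k-ary Generalized Randomized Response (GRR), and a simple poisoning attack in which fake users all submit the adversary's target item. This is an illustrative toy example, not the paper's attack or defense; the protocol choice, parameters, and attack strategy here are assumptions for demonstration only.

```python
import math
import random

def grr_perturb(value, k, eps, rng):
    """k-ary Generalized Randomized Response: report the true value
    with probability p = e^eps / (e^eps + k - 1), otherwise a
    uniformly random *other* value."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.randrange(k - 1)          # pick among the k-1 other items
    return other if other < value else other + 1

def grr_estimate(reports, k, eps):
    """Unbiased frequency estimates: f_v = (n_v/n - q) / (p - q)."""
    n = len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]

rng = random.Random(0)
k, eps, n = 4, 2.0, 20000
true_values = [rng.randrange(k) for _ in range(n)]   # uniform honest data
reports = [grr_perturb(v, k, eps, rng) for v in true_values]

# Toy poisoning attack (an assumption for illustration): m fake users
# all report the target item verbatim, inflating its estimated frequency.
m, target = 2000, 3
poisoned = reports + [target] * m

honest = grr_estimate(reports, k, eps)
attacked = grr_estimate(poisoned, k, eps)
print(f"target frequency: honest ~{honest[target]:.3f}, "
      f"attacked ~{attacked[target]:.3f}")
```

With 10% fake users, the target item's estimated frequency is pushed well above its true share of 0.25, which is the kind of skew the detection and recovery methods above aim to identify and undo.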

@article{li2025_2506.02156,
  title={Mitigating Data Poisoning Attacks to Local Differential Privacy},
  author={Xiaolin Li and Ninghui Li and Boyang Wang and Wenhai Sun},
  journal={arXiv preprint arXiv:2506.02156},
  year={2025}
}