Locally Differentially Private Sparse Vector Aggregation

7 December 2021
Mingxun Zhou
Tianhao Wang
T-H. Hubert Chan
Giulia Fanti
Elaine Shi
    FedML
arXiv:2112.03449
Abstract

Vector mean estimation is a central primitive in federated analytics. In vector mean estimation, each user $i \in [n]$ holds a real-valued vector $v_i \in [-1, 1]^d$, and a server wants to estimate the mean of all $n$ vectors, while protecting each individual user's privacy. In this paper, we consider the $k$-sparse version of the vector mean estimation problem: each user's $d$-dimensional vector has at most $k$ non-zero coordinates, where $k \ll d$. In practice, since the universe size $d$ can be very large (e.g., the space of all possible URLs), we would like the per-user communication to be succinct, i.e., independent of or (poly-)logarithmic in the universe size. We are the first to show matching upper and lower bounds for the $k$-sparse vector mean estimation problem under local differential privacy. Specifically, we construct new mechanisms that achieve asymptotically optimal error as well as succinct communication, under either user-level or event-level LDP. We implement our algorithms and evaluate them on synthetic as well as real-world datasets. Our experiments show that, under typical parameter choices, we often achieve one to two orders of magnitude lower error than prior works, while incurring insignificant communication cost.
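The abstract states the problem but not the mechanism. As a point of reference only, the following is a minimal sketch (in Python with NumPy) of the classic succinct LDP baseline for vector mean estimation: uniform coordinate sampling plus Laplace noise. This is not the paper's mechanism, and every parameter value below (n, d, k, eps) is purely illustrative.

import numpy as np

def randomize(v, eps, rng):
    # Each user sends one (index, noisy value) pair: O(log d) bits plus a scalar.
    # The sampled index is uniform and independent of the data, so it reveals
    # nothing; the scaled value d*v[j] lies in [-d, d] (sensitivity 2d), so
    # Laplace noise with scale 2d/eps makes the value report eps-LDP.
    d = len(v)
    j = rng.integers(d)
    y = d * v[j] + rng.laplace(scale=2.0 * d / eps)
    return j, y

def estimate_mean(reports, d, n):
    # Unbiased: E[y * e_j] = v for each user, so averaging recovers the mean.
    mu_hat = np.zeros(d)
    for j, y in reports:
        mu_hat[j] += y
    return mu_hat / n

# Toy run: n users, each holding a k-sparse vector in [-1, 1]^d.
rng = np.random.default_rng(0)
n, d, k, eps = 100_000, 1_000, 5, 1.0
true_mean = np.zeros(d)
reports = []
for _ in range(n):
    v = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    v[support] = rng.uniform(-1.0, 1.0, size=k)
    true_mean += v / n
    reports.append(randomize(v, eps, rng))

mu_hat = estimate_mean(reports, d, n)
print("max coordinate error:", np.max(np.abs(mu_hat - true_mean)))

This baseline's per-coordinate error grows with the universe size $d$, even when the inputs are sparse; the paper's contribution is mechanisms that exploit the $k$-sparsity to achieve asymptotically optimal error with succinct communication, which is the source of the error reductions reported in the experiments.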
