Range Membership Inference Attacks

Abstract

Machine learning models can leak private information about their training data. The standard methods to measure this privacy risk, based on membership inference attacks (MIAs), only check whether a given data point exactly matches a training point, neglecting that similar or partially overlapping memorized data can reveal the same private information. To address this issue, we introduce the class of range membership inference attacks (RaMIAs), which test whether the model was trained on any data point in a specified range (defined according to the semantics of privacy). We formulate the RaMIA game and design a principled statistical test for its composite hypotheses. We show that RaMIAs can capture privacy loss more accurately and comprehensively than MIAs on various types of data, such as tabular records, images, and text. RaMIAs pave the way for more comprehensive and meaningful privacy auditing of machine learning algorithms.
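
The abstract describes the attack only at a high level: query a range rather than a single point, and test the composite hypothesis that *any* point in the range was trained on. As a rough illustration only, not the authors' exact procedure, the sketch below assembles such a test from a point-wise MIA score: sample candidates from the query range, score each with the point-wise attack, and aggregate the top-scoring fraction so that a single strong match can drive the decision. The model interface (a scikit-learn-style `predict_proba`), the loss-based score, the top-fraction aggregation, and all names (`ramia_score`, `l2_ball_sampler`, `top_frac`) are assumptions made for this sketch.

```python
import numpy as np

def loss_based_mia_score(model, x, y):
    """Point-wise membership score: lower loss on (x, y) -> higher score.
    A loss-based score is a simple stand-in for stronger calibrated MIAs
    (e.g., likelihood-ratio tests against reference models)."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    return float(np.log(proba[y] + 1e-12))  # log-likelihood of the true label

def ramia_score(point_score_fn, sample_from_range, n_samples=64,
                top_frac=0.5, seed=0):
    """Hypothetical range membership score: draw candidate points from the
    query range, score each with a point-wise MIA, and average the top
    fraction of scores. Aggregating only the highest scores targets the
    composite hypothesis 'at least one point in the range was trained on'."""
    rng = np.random.default_rng(seed)
    scores = np.sort([point_score_fn(*sample_from_range(rng))
                      for _ in range(n_samples)])
    k = max(1, int(top_frac * n_samples))
    return float(scores[-k:].mean())

def l2_ball_sampler(center, label, radius):
    """One possible range: an L2 ball of a given radius around a target
    record, e.g., for tabular data."""
    def sample(rng):
        direction = rng.normal(size=center.shape)
        offset = direction / np.linalg.norm(direction)
        return center + radius * rng.uniform() * offset, label
    return sample
```

A caller would then declare the range a "member" when `ramia_score(lambda x, y: loss_based_mia_score(model, x, y), l2_ball_sampler(record, label, 0.1))` exceeds a threshold. The choice of aggregation statistic over the sampled scores is the crux of the composite test, and designing it in a principled way is what the paper addresses.
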

@article{tao2025_2408.05131,
  title={Range Membership Inference Attacks},
  author={Jiashu Tao and Reza Shokri},
  journal={arXiv preprint arXiv:2408.05131},
  year={2025}
}