Context-Aware Membership Inference Attacks against Pre-trained Large Language Models

Main: 9 pages · Appendix: 11 pages · Bibliography: 3 pages · 9 figures · 17 tables
Abstract

Membership Inference Attacks (MIAs) on pre-trained Large Language Models (LLMs) aim to determine whether a data point was part of the model's training set. Prior MIAs designed for classification models fail on LLMs because they ignore the generative nature of LLMs over token sequences. In this paper, we present a novel attack on pre-trained LLMs that adapts MIA statistical tests to the perplexity dynamics of subsequences within a data point. Our method significantly outperforms prior approaches, revealing context-dependent memorization patterns in pre-trained LLMs.
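A minimal sketch of the general idea the abstract describes: scoring membership from the perplexity dynamics of subsequences rather than a single sequence-level perplexity. The sliding-window scheme, the mean-plus-spread statistic, and the model choice below are illustrative assumptions, not the paper's actual statistical test.

```python
# Hypothetical sketch: subsequence-level perplexity dynamics for membership
# scoring. Windowing and the final statistic are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the target pre-trained LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def token_log_likelihoods(text: str) -> torch.Tensor:
    """Per-token log-likelihoods of the input under the causal LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so the logits at position i score the token at position i+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    return log_probs.gather(-1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)

def subsequence_perplexities(text: str, window: int = 16) -> torch.Tensor:
    """Perplexity of each sliding-window subsequence of the input."""
    lls = token_log_likelihoods(text)
    windows = lls.unfold(0, min(window, len(lls)), 1)
    return torch.exp(-windows.mean(dim=-1))

def membership_score(text: str) -> float:
    # Illustrative statistic: training members tend to have uniformly low
    # subsequence perplexity; high mean or high spread suggests a non-member.
    ppls = subsequence_perplexities(text)
    return -(ppls.mean() + ppls.std(unbiased=False)).item()
```

A higher score would then be taken as evidence of membership, with the decision threshold calibrated on data known to be outside the training set.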
