
Has My System Prompt Been Used? Large Language Model Prompt Membership Inference

Abstract

Prompt engineering has emerged as a powerful technique for optimizing large language models (LLMs) for specific applications, enabling faster prototyping and improved performance, and spurring community interest in protecting proprietary system prompts. In this work, we explore a novel perspective on prompt privacy through the lens of membership inference. We develop Prompt Detective, a statistical method to reliably determine whether a given system prompt was used by a third-party language model. Our approach relies on a statistical test comparing the distributions of two groups of model outputs corresponding to different system prompts. Through extensive experiments with a variety of language models, we demonstrate the effectiveness of Prompt Detective for prompt membership inference. Our work reveals that even minor changes in system prompts manifest in distinct response distributions, enabling us to verify prompt usage with statistical significance.
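The abstract does not specify the output representation or test statistic, so the following is only a minimal sketch of how such a two-sample test could be set up. It assumes responses from the target model are first embedded (e.g., with a sentence encoder) and that a permutation test on the distance between group mean embeddings serves as the statistical test; the function name and parameters below are illustrative assumptions, not the paper's implementation.

import numpy as np

def permutation_test(group_a: np.ndarray, group_b: np.ndarray,
                     n_permutations: int = 10_000, seed: int = 0) -> float:
    """Two-sample permutation test on embedded model outputs.

    group_a, group_b: arrays of shape (n_samples, embed_dim) holding
    embeddings of responses generated under each candidate system prompt.
    Returns a p-value for the null hypothesis that both groups of
    responses were drawn from the same distribution.
    """
    rng = np.random.default_rng(seed)
    # Test statistic: Euclidean distance between the two group mean embeddings.
    observed = np.linalg.norm(group_a.mean(axis=0) - group_b.mean(axis=0))
    pooled = np.concatenate([group_a, group_b], axis=0)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        # Randomly reassign pooled responses to two groups and recompute
        # the statistic under the null of exchangeability.
        perm = rng.permutation(len(pooled))
        stat = np.linalg.norm(
            pooled[perm[:n_a]].mean(axis=0) - pooled[perm[n_a:]].mean(axis=0)
        )
        count += stat >= observed
    return (count + 1) / (n_permutations + 1)

Under this sketch, a small p-value indicates the two groups of responses are statistically distinguishable, i.e., the candidate system prompt yields a measurably different response distribution than the alternative.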

@article{levin2025_2502.09974,
  title={Has My System Prompt Been Used? Large Language Model Prompt Membership Inference},
  author={Roman Levin and Valeriia Cherepanova and Abhimanyu Hans and Avi Schwarzschild and Tom Goldstein},
  journal={arXiv preprint arXiv:2502.09974},
  year={2025}
}