Privacy Auditing of Large Language Models

9 March 2025
Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal
Abstract

Current techniques for privacy auditing of large language models (LLMs) have limited efficacy: they rely on basic approaches to generate canaries, which leads to weak membership inference attacks and, in turn, loose lower bounds on the empirical privacy leakage. We develop canaries that are far more effective than those used in prior work under threat models that cover a range of realistic settings. We demonstrate through extensive experiments on multiple families of fine-tuned LLMs that our approach sets a new standard for detection of privacy leakage. For measuring the memorization rate of non-privately trained LLMs, our designed canaries surpass prior approaches. For example, on the Qwen2.5-0.5B model, our canaries achieve 49.6% TPR at 1% FPR, vastly surpassing the prior approach's 4.2% TPR at 1% FPR. Our method can be used to provide a privacy audit of ε ≈ 1 for a model trained with a theoretical ε of 4. To the best of our knowledge, this is the first time that a privacy audit of LLM training has achieved nontrivial auditing success in the setting where the attacker cannot train shadow models, insert gradient canaries, or access the model at every iteration.
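
The headline numbers above combine two standard ingredients: the attack's true-positive rate at a fixed false-positive rate, and the hypothesis-testing view of (ε, δ)-differential privacy, under which any membership inference attack must satisfy TPR ≤ e^ε · FPR + δ. The Python sketch below illustrates that conversion on synthetic canary scores; it is not the paper's auditing procedure, and the function names, score distributions, and the δ = 0 point estimate are illustrative assumptions. A rigorous audit would also account for the finite number of canaries, for example with Clopper-Pearson confidence intervals on the measured TPR and FPR.

import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    # Choose a threshold so that at most `target_fpr` of the held-out
    # (non-member) canaries are flagged, then report the fraction of
    # inserted (member) canaries whose score exceeds that threshold.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > threshold))

def epsilon_point_estimate(tpr, fpr, delta=0.0):
    # Hypothesis-testing bound for (eps, delta)-DP: TPR <= exp(eps) * FPR + delta,
    # so a point estimate of the leaked epsilon is log((TPR - delta) / FPR).
    if tpr <= delta or fpr <= 0.0:
        return 0.0
    return float(np.log((tpr - delta) / fpr))

# Synthetic membership-inference scores for illustration only:
# member canaries score slightly higher than non-member canaries.
rng = np.random.default_rng(0)
member_scores = rng.normal(loc=1.0, scale=1.0, size=1000)
nonmember_scores = rng.normal(loc=0.0, scale=1.0, size=1000)

tpr = tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01)
print(f"TPR at 1% FPR: {tpr:.3f}")
print(f"epsilon point estimate: {epsilon_point_estimate(tpr, fpr=0.01):.2f}")

The connection to canary design is direct: the better the canaries separate the member-score distribution from the non-member one, the higher the TPR at a fixed FPR, and therefore the larger the empirical ε that the audit can certify.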

@article{panda2025_2503.06808,
  title={Privacy Auditing of Large Language Models},
  author={Ashwinee Panda and Xinyu Tang and Milad Nasr and Christopher A. Choquette-Choo and Prateek Mittal},
  journal={arXiv preprint arXiv:2503.06808},
  year={2025}
}