Empirical Calibration and Metric Differential Privacy in Language Models

18 March 2025
Pedro Faustini, Natasha Fernandes, Annabelle McIver, Mark Dras
Abstract

NLP models trained with differential privacy (DP) usually adopt the DP-SGD framework, and privacy guarantees are often reported in terms of the privacy budget ε. However, ε does not have any intrinsic meaning, and it is generally not possible to compare across variants of the framework. Work in image processing has therefore explored how to empirically calibrate noise across frameworks using Membership Inference Attacks (MIAs). However, this kind of calibration has not been established for NLP. In this paper, we show that MIAs offer little help in calibrating privacy, whereas reconstruction attacks are more useful. As a use case, we define a novel kind of directional privacy based on the von Mises-Fisher (VMF) distribution, a metric DP mechanism that perturbs angular distance rather than adding (isotropic) Gaussian noise, and apply this to NLP architectures. We show that, even though formal guarantees are incomparable, empirical privacy calibration reveals that each mechanism has different areas of strength with respect to utility-privacy trade-offs.
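To illustrate the kind of mechanism the abstract describes, below is a minimal numpy sketch of perturbing a text embedding with von Mises-Fisher noise via Wood's (1994) rejection sampler, so that noise is applied to the vector's direction rather than added isotropically as a Gaussian. The embedding dimension, the choice of kappa, and the rescaling to the original magnitude are illustrative assumptions, not the authors' implementation; in a metric-DP reading, the concentration parameter kappa acts as the privacy parameter (smaller kappa means more angular noise and stronger privacy).

```python
# Sketch only: vMF perturbation of an embedding direction (not the paper's exact code).
import numpy as np

def sample_vmf(mu: np.ndarray, kappa: float, rng: np.random.Generator) -> np.ndarray:
    """Draw one sample from vMF(mu, kappa) on the unit sphere in R^d
    using Wood's (1994) rejection sampler."""
    d = mu.shape[0]
    mu = mu / np.linalg.norm(mu)

    # 1) Sample w, the component of the output along mu.
    b = (-2.0 * kappa + np.sqrt(4.0 * kappa**2 + (d - 1) ** 2)) / (d - 1)
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (d - 1) * np.log(1.0 - x0**2)
    while True:
        z = rng.beta((d - 1) / 2.0, (d - 1) / 2.0)
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        if kappa * w + (d - 1) * np.log(1.0 - x0 * w) - c >= np.log(rng.uniform()):
            break

    # 2) Sample a direction v uniformly from the subspace orthogonal to mu.
    v = rng.normal(size=d)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)

    # 3) Combine: the result is a unit vector concentrated around mu.
    return w * mu + np.sqrt(max(1.0 - w**2, 0.0)) * v

# Example: perturb a hypothetical 768-dim sentence embedding.
rng = np.random.default_rng(0)
emb = rng.normal(size=768)
noisy_dir = sample_vmf(emb, kappa=5000.0, rng=rng)   # kappa chosen for illustration
noisy_emb = np.linalg.norm(emb) * noisy_dir          # keep the original magnitude
cos = emb @ noisy_emb / (np.linalg.norm(emb) * np.linalg.norm(noisy_emb))
print("cosine(original, noisy):", float(cos))
```

In high dimensions kappa must be scaled with the embedding size to retain a given angular closeness, which is one reason formal guarantees for vMF and Gaussian mechanisms are not directly comparable and empirical calibration is needed.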

@article{faustini2025_2503.13872,
  title={Empirical Calibration and Metric Differential Privacy in Language Models},
  author={Pedro Faustini and Natasha Fernandes and Annabelle McIver and Mark Dras},
  journal={arXiv preprint arXiv:2503.13872},
  year={2025}
}