Tight Auditing of Differentially Private Machine Learning
arXiv:2302.07956. 15 February 2023. [FedML]
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis
Papers citing "Tight Auditing of Differentially Private Machine Learning" (34 of 34 papers shown):
Empirical Privacy Variance. Yuzheng Hu, Fan Wu, Ruicheng Xian, Yuhang Liu, Lydia Zakynthinou, Pritish Kamath, Chiyuan Zhang, David A. Forsyth. 16 Mar 2025.
(ε, δ) Considered Harmful: Best Practices for Reporting Differential Privacy Guarantees. Juan Felipe Gomez, B. Kulynych, G. Kaissis, Jamie Hayes, Borja Balle, Antti Honkela. 13 Mar 2025.
Privacy Auditing of Large Language Models. Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal. 09 Mar 2025. [PILM]
General-Purpose f-DP Estimation and Auditing in a Black-Box Setting. Önder Askin, Holger Dette, Martin Dunsche, T. Kutta, Yun Lu, Yu Wei, Vassilis Zikas. 10 Feb 2025.
Safeguarding System Prompts for LLMs. Zhifeng Jiang, Zhihua Jin, Guoliang He. 10 Jan 2025. [AAML, SILM]
Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios. Sangyeon Yoon, Wonje Jeung, Albert No. 02 Dec 2024.
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD. Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis. 08 Oct 2024.
Mitigating Noise Detriment in Differentially Private Federated Learning with Model Pre-training. Huitong Jin, Yipeng Zhou, Laizhong Cui, Quan Z. Sheng. 18 Aug 2024. [AI4CE]
Synthetic Data, Similarity-based Privacy Metrics, and Regulatory (Non-)Compliance. Georgi Ganev. 24 Jul 2024.
Weights Shuffling for Improving DPSGD in Transformer-based Models. Jungang Yang, Zhe Ji, Liyao Xiang. 22 Jul 2024.
A Benchmark for Multi-speaker Anonymization. Xiaoxiao Miao, Ruijie Tao, Chang Zeng, Xin Wang. 08 Jul 2024.
Attack-Aware Noise Calibration for Differential Privacy. B. Kulynych, Juan Felipe Gomez, G. Kaissis, Flavio du Pin Calmon, Carmela Troncoso. 02 Jul 2024.
Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations. Vasisht Duddu, Oskari Jarvinen, Lachlan J. Gunn, Nirmal Asokan. 25 Jun 2024.
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. Tudor Cebere, A. Bellet, Nicolas Papernot. 23 May 2024.
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning. Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro. 23 May 2024.
Data Contamination Calibration for Black-box LLMs. Wen-song Ye, Jiaqi Hu, Liyao Li, Haobo Wang, Gang Chen, Junbo Zhao. 20 May 2024.
"What do you want from theory alone?" Experimenting with Tight Auditing of Differentially Private Synthetic Data Generation
Meenatchi Sundaram Muthu Selva Annamalai
Georgi Ganev
Emiliano De Cristofaro
35
9
0
16 May 2024
Bridging Quantum Computing and Differential Privacy: Insights into Quantum Computing Privacy. Yusheng Zhao, Hui Zhong, Xinyue Zhang, Yuqing Li, Chi Zhang, Miao Pan. 14 Mar 2024.
Visual Privacy Auditing with Diffusion Models. Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Alexander Ziller. 12 Mar 2024. [DiffM, AAML]
Synthesizing Tight Privacy and Accuracy Bounds via Weighted Model Counting. Lisa Oakley, Steven Holtzen, Alina Oprea. 26 Feb 2024.
Privacy-Preserving Instructions for Aligning Large Language Models. Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu. 21 Feb 2024.
Auditing Private Prediction. Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr. 14 Feb 2024.
PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. Mishaal Kazmi, H. Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, Mathias Lécuyer. 12 Feb 2024.
Preserving Node-level Privacy in Graph Neural Networks. Zihang Xiang, Tianhao Wang, Di Wang. 12 Nov 2023.
Label Poisoning is All You Need. Rishi Jha, J. Hayase, Sewoong Oh. 29 Oct 2023. [AAML]
Detecting Pretraining Data from Large Language Models. Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer. 25 Oct 2023. [MIALM]
Revealing the True Cost of Locally Differentially Private Protocols: An Auditing Perspective. Héber H. Arcolezi, Sébastien Gambs. 04 Sep 2023.
Epsilon*: Privacy Metric for Machine Learning Models. Diana M. Negoescu, H. González, Saad Eddin Al Orjany, Jilei Yang, Yuliia Lut, ..., Xinyi Zheng, Zachariah Douglas, Vidita Nolkha, P. Ahammad, G. Samorodnitsky. 21 Jul 2023.
DP-Auditorium: a Large Scale Library for Auditing Differential Privacy. William Kong, Andrés Munoz Medina, Mónica Ribero, Umar Syed. 10 Jul 2023.
Privacy Auditing with One (1) Training Run. Thomas Steinke, Milad Nasr, Matthew Jagielski. 15 May 2023.
A Randomized Approach for Tight Privacy Accounting. Jiachen T. Wang, Saeed Mahloujifar, Tong Wu, R. Jia, Prateek Mittal. 17 Apr 2023.
How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy. Natalia Ponomareva, Hussein Hazimeh, Alexey Kurakin, Zheng Xu, Carson E. Denison, H. B. McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta. 01 Mar 2023.
One-shot Empirical Privacy Estimation for Federated Learning. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. B. McMahan, Vinith M. Suriyakumar. 06 Feb 2023. [FedML]
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging. Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus R. Makowski, S. Nebelung, R. Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis. 03 Feb 2023. [MedIm]