arXiv: 2206.05199
Bayesian Estimation of Differential Privacy

10 June 2022
Santiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, A. Salem, Victor Rühle, Andrew J. Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones

Papers citing "Bayesian Estimation of Differential Privacy"

22 papers shown
How Well Can Differential Privacy Be Audited in One Run?
Amit Keinan, Moshe Shenfeld, Katrina Ligett
10 Mar 2025

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri
24 Feb 2025

Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon, Wonje Jeung, Albert No
02 Dec 2024

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis
08 Oct 2024

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro
23 May 2024

"What do you want from theory alone?" Experimenting with Tight Auditing of Differentially Private Synthetic Data Generation
Meenatchi Sundaram Muthu Selva Annamalai, Georgi Ganev, Emiliano De Cristofaro
16 May 2024

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk
Zhangheng Li, Junyuan Hong, Bo-wen Li, Zhangyang Wang
14 Mar 2024

Synthesizing Tight Privacy and Accuracy Bounds via Weighted Model Counting
Lisa Oakley, Steven Holtzen, Alina Oprea
26 Feb 2024

Closed-Form Bounds for DP-SGD against Record-level Inference
Giovanni Cherubin, Boris Köpf, Andrew J. Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
22 Feb 2024

Revisiting Differentially Private Hyper-parameter Tuning
Zihang Xiang, Tianhao Wang, Cheng-Long Wang, Di Wang
20 Feb 2024

PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining
Mishaal Kazmi, H. Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, Mathias Lécuyer
12 Feb 2024

Label Poisoning is All You Need
Rishi Jha, J. Hayase, Sewoong Oh
29 Oct 2023

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
01 Jul 2023

Gaussian Membership Inference Privacy
Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci
12 Jun 2023

A Note On Interpreting Canary Exposure
Matthew Jagielski
31 May 2023

Unleashing the Power of Randomization in Auditing Differentially Private ML
Krishna Pillutla, Galen Andrew, Peter Kairouz, H. B. McMahan, Alina Oprea, Sewoong Oh
29 May 2023

Privacy Auditing with One (1) Training Run
Thomas Steinke, Milad Nasr, Matthew Jagielski
15 May 2023

Tight Auditing of Differentially Private Machine Learning
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis
15 Feb 2023

One-shot Empirical Privacy Estimation for Federated Learning
Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. B. McMahan, Vinith M. Suriyakumar
06 Feb 2023

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem, Giovanni Cherubin, David E. Evans, Boris Köpf, Andrew J. Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella Béguelin
21 Dec 2022

TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data
F. Houssiau, James Jordon, Samuel N. Cohen, Owen Daniel, Andrew Elliott, James Geddes, C. Mole, Camila Rangel Smith, Lukasz Szpruch
12 Nov 2022

Measuring Forgetting of Memorized Training Examples
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
30 Jun 2022