Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
Jason M. Altschuler, Kunal Talwar
arXiv:2205.13710 · 27 May 2022 · FedML
Papers citing "Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss" (18 / 18 papers shown)
An Improved Privacy and Utility Analysis of Differentially Private SGD with Bounded Domain and Smooth Losses
Hao Liang, W. Zhang, Xinlei He, Kaishun He, Hong Xing
25 Feb 2025
Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon, Wonje Jeung, Albert No
02 Dec 2024
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis
08 Oct 2024
Privacy of the last iterate in cyclically-sampled DP-SGD on nonconvex composite losses
Weiwei Kong, Mónica Ribero
07 Jul 2024
Differentially Private Graph Diffusion with Applications in Personalized PageRanks
Rongzhe Wei, Eli Chien, P. Li
22 Jun 2024
Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping
Umut Simsekli, Mert Gurbuzbalaban, S. Yıldırım, Lingjiong Zhu
04 Mar 2024
Tight Group-Level DP Guarantees for DP-SGD with Sampling via Mixture of Gaussians Mechanisms
Arun Ganesh
17 Jan 2024
Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
01 Jul 2023
Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
S. Asoodeh, Mario Díaz
17 May 2023
From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning
Edwige Cyffers, A. Bellet, D. Basu · FedML
24 Feb 2023
Tight Auditing of Differentially Private Machine Learning
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis · FedML
15 Feb 2023
Bounding Training Data Reconstruction in DP-SGD
Jamie Hayes, Saeed Mahloujifar, Borja Balle · AAML, FedML
14 Feb 2023
Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis
01 Feb 2023
Reconstructing Training Data from Model Gradient, Provably
Zihan Wang, Jason D. Lee, Qi Lei · FedML
07 Dec 2022
Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling
Jason M. Altschuler, Kunal Talwar
16 Oct 2022
Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Jiayuan Ye, Reza Shokri · FedML
10 Mar 2022
Private Convex Optimization via Exponential Mechanism
Sivakanth Gopi, Y. Lee, Daogao Liu
01 Mar 2022
Opacus: User-Friendly Differential Privacy Library in PyTorch
Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, ..., Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov · VLM
25 Sep 2021