Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?
arXiv:2402.09540 · 14 February 2024
Andrew Lowy, Zhuohang Li, Jing Liu, T. Koike-Akino, K. Parsons, Ye Wang
Papers citing "Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?" (9 of 9 papers shown)

1. Personalized Federated Training of Diffusion Models with Privacy Guarantees. Kumar Kshitij Patel, Weitong Zhang, Lingxiao Wang. 01 Apr 2025. [MedIm]
2. Tokens for Learning, Tokens for Unlearning: Mitigating Membership Inference Attacks in Large Language Models via Dual-Purpose Training. Toan Tran, Ruixuan Liu, Li Xiong. 27 Feb 2025. [MU]
3. A Tale of Two Imperatives: Privacy and Explainability. Supriya Manna, Niladri Sett. 30 Dec 2024.
4. Faster Algorithms for User-Level Private Stochastic Convex Optimization. Andrew Lowy, Daogao Liu, Hilal Asi. 24 Oct 2024.
5. Analyzing Inference Privacy Risks Through Gradients in Machine Learning. Zhuohang Li, Andrew Lowy, Jing Liu, T. Koike-Akino, K. Parsons, Bradley Malin, Ye Wang. 29 Aug 2024. [FedML]
6. Private Collaborative Edge Inference via Over-the-Air Computation. Selim F. Yilmaz, Burak Hasircioglu, Li Qiao, Deniz Gunduz. 30 Jul 2024. [FedML]
7. Semantic Membership Inference Attack against Large Language Models. Hamid Mozaffari, Virendra J. Marathe. 14 Jun 2024. [MIALM]
8. Privacy-Preserving Instructions for Aligning Large Language Models. Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu. 21 Feb 2024.
9. Extracting Training Data from Large Language Models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel. 14 Dec 2020. [MLAU, SILM]