Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
arXiv:2404.01231 | 1 Apr 2024
Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
Communities: SILM, AAML
Papers citing "Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models" (6 papers shown)
Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions
Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen
AAML | 21 Dec 2024
Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage
Md. Rafi Ur Rashid, Jing Liu, T. Koike-Akino, Shagufta Mehnaz, Ye Wang
MU, SILM | 30 Aug 2024
Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein
FedML | 1 Feb 2022
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML | 29 Jan 2022
When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML | 6 Dec 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM | 14 Dec 2020