arXiv:2201.12675
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
29 January 2022
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML
Papers citing "Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models" (6 / 6 papers shown)
ReCIT: Reconstructing Full Private Data from Gradient in Parameter-Efficient Fine-Tuning of Large Language Models
Jin Xie, Ruishi He, Songze Li, Xiaojun Jia, Shouling Ji
SILM, AAML
29 Apr 2025
When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML, AAML
06 Dec 2021
A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
FedML
14 Jul 2021
Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications
Matthias Paulik, M. Seigel, Henry Mason, Dominic Telaar, Joris Kluivers, ..., Dominic Hughes, O. Javidbakht, Fei Dong, Rehan Rishi, Stanley Hung
FedML
16 Feb 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM
14 Dec 2020
When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar
11 Dec 2020