TMI! Finetuned Models Leak Private Information from their Pretraining Data
1 June 2023 · arXiv:2306.01181
John Abascal, Stanley Wu, Alina Oprea, Jonathan R. Ullman
Papers citing "TMI! Finetuned Models Leak Private Information from their Pretraining Data" (4 papers)
Students Parrot Their Teachers: Membership Inference on Model Distillation
Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini
06 Mar 2023
Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
14 Dec 2020
When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar
11 Dec 2020