TMI! Finetuned Models Leak Private Information from their Pretraining Data

1 June 2023
John Abascal, Stanley Wu, Alina Oprea, Jonathan R. Ullman

Papers citing "TMI! Finetuned Models Leak Private Information from their Pretraining Data"

Showing 4 of 4 papers.
Students Parrot Their Teachers: Membership Inference on Model Distillation
Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini
FedML · 06 Mar 2023
Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 14 Dec 2020
When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar
11 Dec 2020