Provably Confidential Language Modelling

Xuandong Zhao, Lei Li, Yu-Xiang Wang
4 May 2022
Tags: MU

Papers citing "Provably Confidential Language Modelling" (12 of 12 papers shown)

1. Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon
   USVSN Sai Prashanth, Alvin Deng, Kyle O'Brien, Jyothir S V, Mohammad Aflah Khan, ..., Jacob Ray Fuehne, Stella Biderman, Tracy Ke, Katherine Lee, Naomi Saphra
   25 Jun 2024

2. Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs
   Abhimanyu Hans, Yuxin Wen, Neel Jain, John Kirchenbauer, Hamid Kazemi, ..., Siddharth Singh, Gowthami Somepalli, Jonas Geiping, A. Bhatele, Tom Goldstein
   14 Jun 2024

3. PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
   Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar
   05 Jun 2024

4. DE-COP: Detecting Copyrighted Content in Language Models Training Data
   André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li
   15 Feb 2024

5. Analyzing Leakage of Personally Identifiable Information in Language Models
   Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
   Tags: PILM
   01 Feb 2023

6. Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
   Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini
   Tags: PILM, MU
   31 Oct 2022

7. Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe
   Xiang Yue, Huseyin A. Inan, Xuechen Li, Girish Kumar, Julia McAnallen, Hoda Shajari, Huan Sun, David Levitan, Robert Sim
   25 Oct 2022

8. Doubly Fair Dynamic Pricing
   Jianyu Xu, Dan Qiao, Yu-Xiang Wang
   23 Sep 2022

9. Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
   Weiyan Shi, Ryan Shea, Si-An Chen, Chiyuan Zhang, R. Jia, Zhou Yu
   Tags: AAML
   15 Apr 2022

10. Deduplicating Training Data Mitigates Privacy Risks in Language Models
    Nikhil Kandpal, Eric Wallace, Colin Raffel
    Tags: PILM, MU
    14 Feb 2022

11. Deduplicating Training Data Makes Language Models Better
    Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
    Tags: SyDa
    14 Jul 2021

12. Extracting Training Data from Large Language Models
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
    Tags: MLAU, SILM
    14 Dec 2020