Differentially Private Language Models Benefit from Public Pre-training
Gavin Kerrigan, Dylan Slack, Jens Tuyls
arXiv:2009.05886, 13 September 2020
Papers citing "Differentially Private Language Models Benefit from Public Pre-training" (38 of 38 papers shown)
NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation
Rob Romijnders, Stefanos Laskaridis, Ali Shahin Shamsabadi, Hamed Haddadi (25 Apr 2025)

DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs
Tamim Al Mahmud, N. Jebreel, Josep Domingo-Ferrer, David Sánchez (18 Apr 2025) [MU]

Empirical Calibration and Metric Differential Privacy in Language Models
Pedro Faustini, Natasha Fernandes, Annabelle McIver, Mark Dras (18 Mar 2025)

On the Impact of Noise in Differentially Private Text Rewriting
Stephen Meisenbacher, Maulik Chevli, Florian Matthes (31 Jan 2025)

Privately Learning from Graphs with Applications in Fine-tuning Large Language Models
Haoteng Yin, Rongzhe Wei, Eli Chien, P. Li (10 Oct 2024)

Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation
Xianzhi Li, Ran Zmigrod, Zhiqiang Ma, Xiaomo Liu, Xiaodan Zhu (03 Oct 2024)

Undesirable Memorization in Large Language Models: A Survey
Ali Satvaty, Suzan Verberne, Fatih Turkmen (03 Oct 2024) [ELM, PILM]

Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting
Stephen Meisenbacher, Florian Matthes (01 Oct 2024)

DP-MLM: Differentially Private Text Rewriting Using Masked Language Models
Stephen Meisenbacher, Maulik Chevli, Juraj Vladika, Florian Matthes (30 Jun 2024)

IDT: Dual-Task Adversarial Attacks for Privacy Protection
Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras (28 Jun 2024) [SILM, AAML]

PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar (05 Jun 2024)

Advances in Differential Privacy and Differentially Private Machine Learning
Saswat Das, Subhankar Mishra (06 Apr 2024)

LLM-based Privacy Data Augmentation Guided by Knowledge Distillation with a Distribution Tutor for Medical Text Classification
Yiping Song, Juhua Zhang, Zhiliang Tian, Yuxin Yang, Minlie Huang, Dongsheng Li (26 Feb 2024)

ConfusionPrompt: Practical Private Inference for Online Large Language Models
Peihua Mai, Ran Yan, Rui Ye, Youjia Yang, Yinchuan Li, Yan Pang (30 Dec 2023)

Locally Differentially Private Document Generation Using Zero Shot Prompting
Saiteja Utpala, Sara Hooker, Pin-Yu Chen (24 Oct 2023)

Split-and-Denoise: Protect large language model inference with local differential privacy
Peihua Mai, Ran Yan, Zhe Huang, Youjia Yang, Yan Pang (13 Oct 2023)

LatticeGen: A Cooperative Framework which Hides Generated Text in a Lattice for Privacy-Aware Generation on Cloud
Mengke Zhang, Tianxing He, Tianle Wang, Lu Mi, Fatemehsadat Mireshghallah, Binyi Chen, Hao Wang, Yulia Tsvetkov (29 Sep 2023)

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller (27 Sep 2023) [PILM]

Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
Phillip Rust, Anders Søgaard (17 Aug 2023)

Selective Pre-training for Private Fine-tuning
Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zi-Han Lin, Saurabh Naik, Tomasz Religa, Jian Yin, Huishuai Zhang (23 May 2023)

Can Public Large Language Models Help Private Cross-device Federated Learning?
Boxin Wang, Yibo Zhang, Yuan Cao, Bo-wen Li, H. B. McMahan, Sewoong Oh, Zheng Xu, Manzil Zaheer (20 May 2023) [FedML]

Privacy-Preserving Prompt Tuning for Large Language Model Services
Yansong Li, Zhixing Tan, Yang Liu (10 May 2023) [SILM, VLM]

Why Is Public Pretraining Necessary for Private Model Training?
Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Thakurta, Lun Wang (19 Feb 2023)

Efficiency 360: Efficient Vision Transformers
Badri N. Patro, Vijay Srinivas Agneeswaran (16 Feb 2023)

Context-Aware Differential Privacy for Language Modeling
M. H. Dinh, Ferdinando Fioretto (28 Jan 2023)

Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Y. Lee, A. Backurs, Nenghai Yu, Jiang Bian (03 Dec 2022)

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov (14 Oct 2022) [ELM]

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei (01 Jun 2022)

Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora, Christopher Ré (27 May 2022) [FedML]

Sentence-level Privacy for Document Embeddings
Casey Meehan, Khalil Mrini, Kamalika Chaudhuri (10 May 2022)

The Impact of Differential Privacy on Group Disparity Mitigation
Victor Petrén Bach Hansen, A. Neerkaje, Ramit Sawhney, Lucie Flek, Anders Søgaard (05 Mar 2022)

Submix: Practical Private Prediction for Large-Scale Language Models
Antonio A. Ginart, L. van der Maaten, James Y. Zou, Chuan Guo (04 Jan 2022)

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang (13 Oct 2021)

Large Language Models Can Be Strong Differentially Private Learners
Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto (12 Oct 2021)

Learning Domain Specific Language Models for Automatic Speech Recognition through Machine Translation
Saurav Jha (21 Sep 2021)

Selective Differential Privacy for Language Modeling
Weiyan Shi, Aiqi Cui, Evan Li, R. Jia, Zhou Yu (30 Aug 2021)

DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing
Wenxiao Wang, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, D. Song, R. Jia (02 Mar 2021)

KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
Yuta Nakamura, S. Hanaoka, Y. Nomura, Naoto Hayashi, O. Abe, Shuntaro Yada, Shoko Wakamiya (31 Dec 2020) [MIACV]