arXiv:2110.05679
Large Language Models Can Be Strong Differentially Private Learners
Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto
12 October 2021
Papers citing "Large Language Models Can Be Strong Differentially Private Learners" (9 of 59 papers shown)
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 280 · 3,843 · 0 · 18 Apr 2021
Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu
FedML · SILM · 91 · 110 · 0 · 25 Feb 2021
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
VLM · 246 · 285 · 0 · 02 Feb 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
241 · 1,916 · 0 · 31 Dec 2020
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU · SILM · 267 · 1,812 · 0 · 14 Dec 2020
Private Post-GAN Boosting
Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork
114 · 29 · 0 · 23 Jul 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
226 · 4,453 · 0 · 23 Jan 2020
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM · AI4MH · 408 · 2,584 · 0 · 03 Sep 2019
Efficient Per-Example Gradient Computations
Ian Goodfellow
184 · 74 · 0 · 07 Oct 2015