Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
Florian Tramèr, Gautam Kamath, Nicholas Carlini
arXiv 2212.06470 · 13 December 2022 · SILM

Papers citing "Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining" (17 papers shown):

Position: LLM Unlearning Benchmarks are Weak Measures of Progress
Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith
MU · 03 Oct 2024

Differentially Private Active Learning: Balancing Effective Data Selection and Privacy
Kristian Schwethelm, Johannes Kaiser, Jonas Kuntzer, Mehmet Yigitsoy, Daniel Rueckert, Georgios Kaissis
01 Oct 2024

Better Locally Private Sparse Estimation Given Multiple Samples Per User
Yuheng Ma, Ke Jia, Hanfang Yang
FedML · 08 Aug 2024

Noise-Aware Differentially Private Regression via Meta-Learning
Ossi Räisä, Stratis Markou, Matthew Ashman, W. Bruinsma, Marlon Tobaben, Antti Honkela, Richard E. Turner
12 Jun 2024

Reconstructing training data from document understanding models
Jérémie Dentan, Arnaud Paran, A. Shabou
AAML · SyDa · 05 Jun 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization
Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal
09 Jan 2024

PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining
Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, Tianhao Wang
19 Oct 2023

Private Distribution Learning with Public Data: The View from Sample Compression
Shai Ben-David, Alex Bie, C. Canonne, Gautam Kamath, Vikrant Singhal
11 Aug 2023

PILLAR: How to make semi-private learning more effective
Francesco Pinto, Yaxian Hu, Fanny Yang, Amartya Sanyal
06 Jun 2023

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML · AAML · 06 Dec 2021

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
SyDa · 14 Jul 2021

Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu
FedML · SILM · 25 Feb 2021

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
AIMat · 31 Dec 2020

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU · SILM · 14 Dec 2020

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
Florian Tramèr, Dan Boneh
FedML · 08 Jun 2018