DPZero: Private Fine-Tuning of Language Models without Backpropagation
arXiv:2310.09639 · 14 October 2023
Liang Zhang, Bingcong Li, K. K. Thekumparampil, Sewoong Oh, Niao He
Papers citing "DPZero: Private Fine-Tuning of Language Models without Backpropagation" (14 of 14 papers shown):
1. Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
   Shanshan Han · 09 Oct 2024 · 1 citation
2. Privacy-preserving Fine-tuning of Large Language Models through Flatness
   Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei · 07 Mar 2024 · 5 citations
3. Private Fine-tuning of Large Language Models with Zeroth-order Optimization
   Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal · 09 Jan 2024 · 18 citations
4. Zero redundancy distributed learning with differential privacy
   Zhiqi Bu, Justin Chiu, Ruixuan Liu, Sheng Zha, George Karypis · 20 Nov 2023 · 4 citations
5. Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness
   E. Zelikman, Qian Huang, Percy Liang, Nick Haber, Noah D. Goodman · 16 Jun 2023 · 14 citations
6. Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
   Anastasia Koloskova, Hadrien Hendrikx, Sebastian U. Stich · 02 May 2023 · 48 citations
7. Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
   Tianyi Lin, Zeyu Zheng, Michael I. Jordan · 12 Sep 2022 · 50 citations
8. Memorization in NLP Fine-tuning Methods
   Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David E. Evans, Taylor Berg-Kirkpatrick · 25 May 2022 · 39 citations · Tags: AAML
9. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · 04 Mar 2022 · 11,730 citations · Tags: OSLM, ALM
10. Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning
    Wenzhi Fang, Ziyi Yu, Yuning Jiang, Yuanming Shi, Colin N. Jones, Yong Zhou · 24 Jan 2022 · 53 citations · Tags: FedML
11. Differentially Private Fine-tuning of Language Models
    Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang · 13 Oct 2021 · 258 citations
12. Making Pre-trained Language Models Better Few-shot Learners
    Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020 · 1,898 citations
13. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
    Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · 20 Apr 2018 · 6,927 citations · Tags: ELM
14. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
    Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016 · 1,190 citations